00:00:00.000  Started by upstream project "autotest-per-patch" build number 132401
00:00:00.000  originally caused by:
00:00:00.000   Started by user sys_sgci
00:00:00.076  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.077  The recommended git tool is: git
00:00:00.077  using credential 00000000-0000-0000-0000-000000000002
00:00:00.079   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.130  Fetching changes from the remote Git repository
00:00:00.131   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.204  Using shallow fetch with depth 1
00:00:00.204  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.204   > git --version # timeout=10
00:00:00.295   > git --version # 'git version 2.39.2'
00:00:00.295  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.322  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.322   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.884   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.895   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.907  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.907   > git config core.sparsecheckout # timeout=10
00:00:03.920   > git read-tree -mu HEAD # timeout=10
00:00:03.936   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.955  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.956   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.052  [Pipeline] Start of Pipeline
00:00:04.066  [Pipeline] library
00:00:04.067  Loading library shm_lib@master
00:00:04.067  Library shm_lib@master is cached. Copying from home.
00:00:04.082  [Pipeline] node
00:00:04.088  Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest
00:00:04.089  [Pipeline] {
00:00:04.096  [Pipeline] catchError
00:00:04.097  [Pipeline] {
00:00:04.108  [Pipeline] wrap
00:00:04.115  [Pipeline] {
00:00:04.123  [Pipeline] stage
00:00:04.125  [Pipeline] { (Prologue)
00:00:04.139  [Pipeline] echo
00:00:04.140  Node: VM-host-SM9
00:00:04.145  [Pipeline] cleanWs
00:00:04.153  [WS-CLEANUP] Deleting project workspace...
00:00:04.153  [WS-CLEANUP] Deferred wipeout is used...
00:00:04.161  [WS-CLEANUP] done
00:00:04.337  [Pipeline] setCustomBuildProperty
00:00:04.403  [Pipeline] httpRequest
00:00:04.872  [Pipeline] echo
00:00:04.874  Sorcerer 10.211.164.20 is alive
00:00:04.882  [Pipeline] retry
00:00:04.884  [Pipeline] {
00:00:04.891  [Pipeline] httpRequest
00:00:04.894  HttpMethod: GET
00:00:04.895  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.895  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.911  Response Code: HTTP/1.1 200 OK
00:00:04.911  Success: Status code 200 is in the accepted range: 200,404
00:00:04.912  Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.277  [Pipeline] }
00:00:10.294  [Pipeline] // retry
00:00:10.304  [Pipeline] sh
00:00:10.590  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.607  [Pipeline] httpRequest
00:00:10.980  [Pipeline] echo
00:00:10.983  Sorcerer 10.211.164.20 is alive
00:00:10.996  [Pipeline] retry
00:00:10.998  [Pipeline] {
00:00:11.016  [Pipeline] httpRequest
00:00:11.021  HttpMethod: GET
00:00:11.022  URL: http://10.211.164.20/packages/spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz
00:00:11.022  Sending request to url: http://10.211.164.20/packages/spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz
00:00:11.026  Response Code: HTTP/1.1 200 OK
00:00:11.027  Success: Status code 200 is in the accepted range: 200,404
00:00:11.027  Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz
00:01:23.634  [Pipeline] }
00:01:23.672  [Pipeline] // retry
00:01:23.681  [Pipeline] sh
00:01:23.956  + tar --no-same-owner -xf spdk_5c8d9922304f954f9b9612f124a8d7bc5102ca33.tar.gz
00:01:27.250  [Pipeline] sh
00:01:27.531  + git -C spdk log --oneline -n5
00:01:27.531  5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function
00:01:27.531  d58114851 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io
00:01:27.531  32c3f377c bdev: Use data_block_size for upper layer buffer if hide_metadata is true
00:01:27.531  d3dfde872 bdev: Add APIs get metadata config via desc depending on hide_metadata option
00:01:27.531  b6a8866f3 bdev: Add spdk_bdev_open_ext_v2() to support per-open options
00:01:27.554  [Pipeline] writeFile
00:01:27.571  [Pipeline] sh
00:01:27.854  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:27.867  [Pipeline] sh
00:01:28.148  + cat autorun-spdk.conf
00:01:28.148  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.148  SPDK_TEST_NVME=1
00:01:28.148  SPDK_TEST_FTL=1
00:01:28.148  SPDK_TEST_ISAL=1
00:01:28.148  SPDK_RUN_ASAN=1
00:01:28.148  SPDK_RUN_UBSAN=1
00:01:28.148  SPDK_TEST_XNVME=1
00:01:28.148  SPDK_TEST_NVME_FDP=1
00:01:28.148  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:28.155  RUN_NIGHTLY=0
00:01:28.157  [Pipeline] }
00:01:28.171  [Pipeline] // stage
00:01:28.189  [Pipeline] stage
00:01:28.192  [Pipeline] { (Run VM)
00:01:28.207  [Pipeline] sh
00:01:28.488  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:28.488  + echo 'Start stage prepare_nvme.sh'
00:01:28.488  Start stage prepare_nvme.sh
00:01:28.488  + [[ -n 5 ]]
00:01:28.488  + disk_prefix=ex5
00:01:28.488  + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:28.488  + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:28.488  + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:28.488  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:28.488  ++ SPDK_TEST_NVME=1
00:01:28.488  ++ SPDK_TEST_FTL=1
00:01:28.488  ++ SPDK_TEST_ISAL=1
00:01:28.488  ++ SPDK_RUN_ASAN=1
00:01:28.488  ++ SPDK_RUN_UBSAN=1
00:01:28.488  ++ SPDK_TEST_XNVME=1
00:01:28.488  ++ SPDK_TEST_NVME_FDP=1
00:01:28.488  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:28.488  ++ RUN_NIGHTLY=0
00:01:28.488  + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:28.488  + nvme_files=()
00:01:28.488  + declare -A nvme_files
00:01:28.488  + backend_dir=/var/lib/libvirt/images/backends
00:01:28.488  + nvme_files['nvme.img']=5G
00:01:28.488  + nvme_files['nvme-cmb.img']=5G
00:01:28.488  + nvme_files['nvme-multi0.img']=4G
00:01:28.488  + nvme_files['nvme-multi1.img']=4G
00:01:28.488  + nvme_files['nvme-multi2.img']=4G
00:01:28.488  + nvme_files['nvme-openstack.img']=8G
00:01:28.488  + nvme_files['nvme-zns.img']=5G
00:01:28.488  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:01:28.488  + ((  SPDK_TEST_FTL == 1  ))
00:01:28.488  + nvme_files["nvme-ftl.img"]=6G
00:01:28.488  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:01:28.488  + nvme_files["nvme-fdp.img"]=1G
00:01:28.488  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:28.488  + for nvme in "${!nvme_files[@]}"
00:01:28.488  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:01:28.488  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:28.488  + for nvme in "${!nvme_files[@]}"
00:01:28.488  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:01:28.747  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:28.747  + for nvme in "${!nvme_files[@]}"
00:01:28.747  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:01:28.747  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:28.747  + for nvme in "${!nvme_files[@]}"
00:01:28.747  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:01:28.747  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:28.747  + for nvme in "${!nvme_files[@]}"
00:01:28.747  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:01:28.747  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:28.747  + for nvme in "${!nvme_files[@]}"
00:01:28.747  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:01:29.006  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.006  + for nvme in "${!nvme_files[@]}"
00:01:29.006  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:01:29.264  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:29.264  + for nvme in "${!nvme_files[@]}"
00:01:29.264  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:01:29.264  Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:29.264  + for nvme in "${!nvme_files[@]}"
00:01:29.264  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:01:29.523  Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:29.523  ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:01:29.523  + echo 'End stage prepare_nvme.sh'
00:01:29.523  End stage prepare_nvme.sh
00:01:29.534  [Pipeline] sh
00:01:29.815  + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:29.815  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:29.815  
00:01:29.815  DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:29.815  SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:29.815  VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:29.815  HELP=0
00:01:29.815  DRY_RUN=0
00:01:29.815  NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:01:29.815  NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:29.815  NVME_AUTO_CREATE=0
00:01:29.815  NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:01:29.815  NVME_CMB=,,,,
00:01:29.815  NVME_PMR=,,,,
00:01:29.815  NVME_ZNS=,,,,
00:01:29.815  NVME_MS=true,,,,
00:01:29.815  NVME_FDP=,,,on,
00:01:29.815  SPDK_VAGRANT_DISTRO=fedora39
00:01:29.815  SPDK_VAGRANT_VMCPU=10
00:01:29.815  SPDK_VAGRANT_VMRAM=12288
00:01:29.815  SPDK_VAGRANT_PROVIDER=libvirt
00:01:29.815  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:29.815  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:29.815  SPDK_OPENSTACK_NETWORK=0
00:01:29.815  VAGRANT_PACKAGE_BOX=0
00:01:29.815  VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:29.815  FORCE_DISTRO=true
00:01:29.815  VAGRANT_BOX_VERSION=
00:01:29.815  EXTRA_VAGRANTFILES=
00:01:29.815  NIC_MODEL=e1000
00:01:29.815  
00:01:29.815  mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:29.815  /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:34.006  Bringing machine 'default' up with 'libvirt' provider...
00:01:34.006  ==> default: Creating image (snapshot of base box volume).
00:01:34.265  ==> default: Creating domain with the following settings...
00:01:34.265  ==> default:  -- Name:              fedora39-39-1.5-1721788873-2326_default_1732111932_e3256edb96a066ce1223
00:01:34.265  ==> default:  -- Domain type:       kvm
00:01:34.265  ==> default:  -- Cpus:              10
00:01:34.265  ==> default:  -- Feature:           acpi
00:01:34.265  ==> default:  -- Feature:           apic
00:01:34.265  ==> default:  -- Feature:           pae
00:01:34.265  ==> default:  -- Memory:            12288M
00:01:34.265  ==> default:  -- Memory Backing:    hugepages: 
00:01:34.265  ==> default:  -- Management MAC:    
00:01:34.265  ==> default:  -- Loader:            
00:01:34.265  ==> default:  -- Nvram:             
00:01:34.265  ==> default:  -- Base box:          spdk/fedora39
00:01:34.265  ==> default:  -- Storage pool:      default
00:01:34.265  ==> default:  -- Image:             /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732111932_e3256edb96a066ce1223.img (20G)
00:01:34.265  ==> default:  -- Volume Cache:      default
00:01:34.265  ==> default:  -- Kernel:            
00:01:34.265  ==> default:  -- Initrd:            
00:01:34.265  ==> default:  -- Graphics Type:     vnc
00:01:34.265  ==> default:  -- Graphics Port:     -1
00:01:34.265  ==> default:  -- Graphics IP:       127.0.0.1
00:01:34.265  ==> default:  -- Graphics Password: Not defined
00:01:34.265  ==> default:  -- Video Type:        cirrus
00:01:34.265  ==> default:  -- Video VRAM:        9216
00:01:34.265  ==> default:  -- Sound Type:	
00:01:34.265  ==> default:  -- Keymap:            en-us
00:01:34.265  ==> default:  -- TPM Path:          
00:01:34.265  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:34.265  ==> default:  -- Command line args: 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:01:34.265  ==> default:     -> value=-drive, 
00:01:34.265  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 
00:01:34.265  ==> default:     -> value=-drive, 
00:01:34.265  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 
00:01:34.265  ==> default:     -> value=-drive, 
00:01:34.265  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:34.265  ==> default:     -> value=-drive, 
00:01:34.265  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:34.265  ==> default:     -> value=-drive, 
00:01:34.265  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 
00:01:34.265  ==> default:     -> value=-drive, 
00:01:34.265  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 
00:01:34.265  ==> default:     -> value=-device, 
00:01:34.265  ==> default:     -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:34.265  ==> default: Creating shared folders metadata...
00:01:34.265  ==> default: Starting domain.
00:01:35.641  ==> default: Waiting for domain to get an IP address...
00:01:53.727  ==> default: Waiting for SSH to become available...
00:01:55.104  ==> default: Configuring and enabling network interfaces...
00:01:59.366      default: SSH address: 192.168.121.218:22
00:01:59.366      default: SSH username: vagrant
00:01:59.366      default: SSH auth method: private key
00:02:01.269  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:09.386  ==> default: Mounting SSHFS shared folder...
00:02:11.367  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:11.367  ==> default: Checking Mount..
00:02:12.304  ==> default: Folder Successfully Mounted!
00:02:12.304  ==> default: Running provisioner: file...
00:02:13.240      default: ~/.gitconfig => .gitconfig
00:02:13.499  
00:02:13.499    SUCCESS!
00:02:13.499  
00:02:13.499    cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:13.499    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:13.499    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:13.499  
00:02:13.508  [Pipeline] }
00:02:13.523  [Pipeline] // stage
00:02:13.532  [Pipeline] dir
00:02:13.533  Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:13.534  [Pipeline] {
00:02:13.546  [Pipeline] catchError
00:02:13.548  [Pipeline] {
00:02:13.560  [Pipeline] sh
00:02:13.840  + vagrant ssh-config --host vagrant
00:02:13.840  + sed -ne /^Host/,$p
00:02:13.840  + tee ssh_conf
00:02:18.068  Host vagrant
00:02:18.068    HostName 192.168.121.218
00:02:18.068    User vagrant
00:02:18.068    Port 22
00:02:18.068    UserKnownHostsFile /dev/null
00:02:18.068    StrictHostKeyChecking no
00:02:18.068    PasswordAuthentication no
00:02:18.068    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:18.068    IdentitiesOnly yes
00:02:18.068    LogLevel FATAL
00:02:18.068    ForwardAgent yes
00:02:18.068    ForwardX11 yes
00:02:18.068  
00:02:18.081  [Pipeline] withEnv
00:02:18.084  [Pipeline] {
00:02:18.097  [Pipeline] sh
00:02:18.375  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:18.375  		source /etc/os-release
00:02:18.375  		[[ -e /image.version ]] && img=$(< /image.version)
00:02:18.375  		# Minimal, systemd-like check.
00:02:18.375  		if [[ -e /.dockerenv ]]; then
00:02:18.375  			# Clear garbage from the node's name:
00:02:18.375  			#  agt-er_autotest_547-896 -> autotest_547-896
00:02:18.375  			#  $HOSTNAME is the actual container id
00:02:18.375  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:18.375  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:18.375  				# We can assume this is a mount from a host where container is running,
00:02:18.375  				# so fetch its hostname to easily identify the target swarm worker.
00:02:18.375  				container="$(< /etc/hostname) ($agent)"
00:02:18.375  			else
00:02:18.375  				# Fallback
00:02:18.375  				container=$agent
00:02:18.375  			fi
00:02:18.375  		fi
00:02:18.375  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:18.375  
00:02:18.387  [Pipeline] }
00:02:18.404  [Pipeline] // withEnv
00:02:18.413  [Pipeline] setCustomBuildProperty
00:02:18.428  [Pipeline] stage
00:02:18.430  [Pipeline] { (Tests)
00:02:18.447  [Pipeline] sh
00:02:18.727  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:18.743  [Pipeline] sh
00:02:19.026  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:19.297  [Pipeline] timeout
00:02:19.298  Timeout set to expire in 50 min
00:02:19.300  [Pipeline] {
00:02:19.316  [Pipeline] sh
00:02:19.598  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:20.161  HEAD is now at 5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function
00:02:20.173  [Pipeline] sh
00:02:20.453  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:20.749  [Pipeline] sh
00:02:21.079  + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:21.384  [Pipeline] sh
00:02:21.666  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:02:21.923  ++ readlink -f spdk_repo
00:02:21.923  + DIR_ROOT=/home/vagrant/spdk_repo
00:02:21.923  + [[ -n /home/vagrant/spdk_repo ]]
00:02:21.923  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:21.923  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:21.923  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:21.923  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:21.923  + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:21.924  + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:21.924  + cd /home/vagrant/spdk_repo
00:02:21.924  + source /etc/os-release
00:02:21.924  ++ NAME='Fedora Linux'
00:02:21.924  ++ VERSION='39 (Cloud Edition)'
00:02:21.924  ++ ID=fedora
00:02:21.924  ++ VERSION_ID=39
00:02:21.924  ++ VERSION_CODENAME=
00:02:21.924  ++ PLATFORM_ID=platform:f39
00:02:21.924  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:21.924  ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:21.924  ++ LOGO=fedora-logo-icon
00:02:21.924  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:21.924  ++ HOME_URL=https://fedoraproject.org/
00:02:21.924  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:21.924  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:21.924  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:21.924  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:21.924  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:21.924  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:21.924  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:21.924  ++ SUPPORT_END=2024-11-12
00:02:21.924  ++ VARIANT='Cloud Edition'
00:02:21.924  ++ VARIANT_ID=cloud
00:02:21.924  + uname -a
00:02:21.924  Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:21.924  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:22.182  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:22.440  Hugepages
00:02:22.440  node     hugesize     free /  total
00:02:22.440  node0   1048576kB        0 /      0
00:02:22.440  node0      2048kB        0 /      0
00:02:22.440  
00:02:22.440  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:22.441  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:02:22.441  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:02:22.441  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:02:22.441  NVMe                      0000:00:12.0    1b36   0010   unknown nvme             nvme2      nvme2n1 nvme2n2 nvme2n3
00:02:22.699  NVMe                      0000:00:13.0    1b36   0010   unknown nvme             nvme3      nvme3n1
00:02:22.699  + rm -f /tmp/spdk-ld-path
00:02:22.699  + source autorun-spdk.conf
00:02:22.699  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:22.699  ++ SPDK_TEST_NVME=1
00:02:22.699  ++ SPDK_TEST_FTL=1
00:02:22.699  ++ SPDK_TEST_ISAL=1
00:02:22.699  ++ SPDK_RUN_ASAN=1
00:02:22.699  ++ SPDK_RUN_UBSAN=1
00:02:22.699  ++ SPDK_TEST_XNVME=1
00:02:22.699  ++ SPDK_TEST_NVME_FDP=1
00:02:22.699  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:22.699  ++ RUN_NIGHTLY=0
00:02:22.699  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:02:22.699  + [[ -n '' ]]
00:02:22.699  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:22.699  + for M in /var/spdk/build-*-manifest.txt
00:02:22.699  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:22.699  + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:22.699  + for M in /var/spdk/build-*-manifest.txt
00:02:22.699  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:22.699  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:22.699  + for M in /var/spdk/build-*-manifest.txt
00:02:22.699  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:22.699  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:22.699  ++ uname
00:02:22.699  + [[ Linux == \L\i\n\u\x ]]
00:02:22.699  + sudo dmesg -T
00:02:22.699  + sudo dmesg --clear
00:02:22.699  + dmesg_pid=5294
00:02:22.699  + sudo dmesg -Tw
00:02:22.699  + [[ Fedora Linux == FreeBSD ]]
00:02:22.699  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:22.699  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:22.699  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:22.699  + [[ -x /usr/src/fio-static/fio ]]
00:02:22.699  + export FIO_BIN=/usr/src/fio-static/fio
00:02:22.699  + FIO_BIN=/usr/src/fio-static/fio
00:02:22.699  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:22.699  + [[ ! -v VFIO_QEMU_BIN ]]
00:02:22.699  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:22.699  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:22.699  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:22.699  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:22.699  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:22.699  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:22.699  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:22.699    14:13:01  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:22.699   14:13:01  -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:22.699    14:13:01  -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:22.699    14:13:01  -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:22.699    14:13:01  -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:22.699    14:13:01  -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:22.699    14:13:01  -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:22.699    14:13:01  -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:22.699    14:13:01  -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:22.699    14:13:01  -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:22.699    14:13:01  -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:22.699    14:13:01  -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:02:22.699   14:13:01  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:22.699   14:13:01  -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:22.957     14:13:01  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:22.957    14:13:01  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:22.957     14:13:01  -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:22.957     14:13:01  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:22.957     14:13:01  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:22.957     14:13:01  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:22.957      14:13:01  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.957      14:13:01  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.958      14:13:01  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.958      14:13:01  -- paths/export.sh@5 -- $ export PATH
00:02:22.958      14:13:01  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:22.958    14:13:01  -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:22.958      14:13:01  -- common/autobuild_common.sh@493 -- $ date +%s
00:02:22.958     14:13:01  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732111981.XXXXXX
00:02:22.958    14:13:01  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732111981.0RPSIC
00:02:22.958    14:13:01  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:22.958    14:13:01  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:22.958    14:13:01  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:22.958    14:13:01  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:22.958    14:13:01  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:22.958     14:13:01  -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:22.958     14:13:01  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:22.958     14:13:01  -- common/autotest_common.sh@10 -- $ set +x
00:02:22.958    14:13:01  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:22.958    14:13:01  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:22.958    14:13:01  -- pm/common@17 -- $ local monitor
00:02:22.958    14:13:01  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.958    14:13:01  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:22.958    14:13:01  -- pm/common@25 -- $ sleep 1
00:02:22.958     14:13:01  -- pm/common@21 -- $ date +%s
00:02:22.958     14:13:01  -- pm/common@21 -- $ date +%s
00:02:22.958    14:13:01  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732111981
00:02:22.958    14:13:01  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732111981
00:02:22.958  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732111981_collect-vmstat.pm.log
00:02:22.958  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732111981_collect-cpu-load.pm.log
00:02:23.895    14:13:02  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:23.895   14:13:02  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:23.895   14:13:02  -- spdk/autobuild.sh@12 -- $ umask 022
00:02:23.895   14:13:02  -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:23.895   14:13:02  -- spdk/autobuild.sh@16 -- $ date -u
00:02:23.895  Wed Nov 20 02:13:02 PM UTC 2024
00:02:23.895   14:13:02  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:23.895  v25.01-pre-225-g5c8d99223
00:02:23.895   14:13:02  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:23.895   14:13:02  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:23.896   14:13:02  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:23.896   14:13:02  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:23.896   14:13:02  -- common/autotest_common.sh@10 -- $ set +x
00:02:23.896  ************************************
00:02:23.896  START TEST asan
00:02:23.896  ************************************
00:02:23.896  using asan
00:02:23.896   14:13:02 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:23.896  
00:02:23.896  real	0m0.000s
00:02:23.896  user	0m0.000s
00:02:23.896  sys	0m0.000s
00:02:23.896   14:13:02 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:23.896  ************************************
00:02:23.896  END TEST asan
00:02:23.896  ************************************
00:02:23.896   14:13:02 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:23.896   14:13:02  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:23.896   14:13:02  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:23.896   14:13:02  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:23.896   14:13:02  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:23.896   14:13:02  -- common/autotest_common.sh@10 -- $ set +x
00:02:23.896  ************************************
00:02:23.896  START TEST ubsan
00:02:23.896  ************************************
00:02:23.896  using ubsan
00:02:23.896   14:13:02 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:23.896  
00:02:23.896  real	0m0.000s
00:02:23.896  user	0m0.000s
00:02:23.896  sys	0m0.000s
00:02:23.896   14:13:02 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:23.896   14:13:02 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:23.896  ************************************
00:02:23.896  END TEST ubsan
00:02:23.896  ************************************
00:02:23.896   14:13:02  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:23.896   14:13:02  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:23.896   14:13:02  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:23.896   14:13:02  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:23.896   14:13:02  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:23.896   14:13:02  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:23.896   14:13:02  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:23.896   14:13:02  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:23.896   14:13:02  -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:24.156  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:24.156  Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:24.414  Using 'verbs' RDMA provider
00:02:37.568  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:52.445  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:52.445  Creating mk/config.mk...done.
00:02:52.445  Creating mk/cc.flags.mk...done.
00:02:52.445  Type 'make' to build.
00:02:52.445   14:13:29  -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:52.445   14:13:29  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:52.445   14:13:29  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:52.445   14:13:29  -- common/autotest_common.sh@10 -- $ set +x
00:02:52.445  ************************************
00:02:52.445  START TEST make
00:02:52.445  ************************************
00:02:52.445   14:13:29 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:52.445  (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:52.445  	export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:52.445  	meson setup builddir \
00:02:52.445  	-Dwith-libaio=enabled \
00:02:52.445  	-Dwith-liburing=enabled \
00:02:52.445  	-Dwith-libvfn=disabled \
00:02:52.445  	-Dwith-spdk=disabled \
00:02:52.445  	-Dexamples=false \
00:02:52.445  	-Dtests=false \
00:02:52.445  	-Dtools=false && \
00:02:52.445  	meson compile -C builddir && \
00:02:52.445  	cd -)
00:02:52.445  make[1]: Nothing to be done for 'all'.
00:02:53.821  The Meson build system
00:02:53.821  Version: 1.5.0
00:02:53.821  Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:53.821  Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:53.821  Build type: native build
00:02:53.821  Project name: xnvme
00:02:53.821  Project version: 0.7.5
00:02:53.821  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:53.821  C linker for the host machine: cc ld.bfd 2.40-14
00:02:53.821  Host machine cpu family: x86_64
00:02:53.821  Host machine cpu: x86_64
00:02:53.821  Message: host_machine.system: linux
00:02:53.821  Compiler for C supports arguments -Wno-missing-braces: YES 
00:02:53.821  Compiler for C supports arguments -Wno-cast-function-type: YES 
00:02:53.821  Compiler for C supports arguments -Wno-strict-aliasing: YES 
00:02:53.821  Run-time dependency threads found: YES
00:02:53.821  Has header "setupapi.h" : NO 
00:02:53.821  Has header "linux/blkzoned.h" : YES 
00:02:53.821  Has header "linux/blkzoned.h" : YES (cached)
00:02:53.821  Has header "libaio.h" : YES 
00:02:53.821  Library aio found: YES
00:02:53.821  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:53.821  Run-time dependency liburing found: YES 2.2
00:02:53.821  Dependency libvfn skipped: feature with-libvfn disabled
00:02:53.821  Found CMake: /usr/bin/cmake (3.27.7)
00:02:53.821  Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:53.821  Subproject spdk : skipped: feature with-spdk disabled
00:02:53.821  Run-time dependency appleframeworks found: NO (tried framework)
00:02:53.821  Run-time dependency appleframeworks found: NO (tried framework)
00:02:53.821  Library rt found: YES
00:02:53.821  Checking for function "clock_gettime" with dependency -lrt: YES 
00:02:53.821  Configuring xnvme_config.h using configuration
00:02:53.821  Configuring xnvme.spec using configuration
00:02:53.821  Run-time dependency bash-completion found: YES 2.11
00:02:53.821  Message: Bash-completions: /usr/share/bash-completion/completions
00:02:53.821  Program cp found: YES (/usr/bin/cp)
00:02:53.821  Build targets in project: 3
00:02:53.821  
00:02:53.821  xnvme 0.7.5
00:02:53.821  
00:02:53.821    Subprojects
00:02:53.821      spdk         : NO Feature 'with-spdk' disabled
00:02:53.821  
00:02:53.821    User defined options
00:02:53.821      examples     : false
00:02:53.821      tests        : false
00:02:53.821      tools        : false
00:02:53.821      with-libaio  : enabled
00:02:53.821      with-liburing: enabled
00:02:53.821      with-libvfn  : disabled
00:02:53.821      with-spdk    : disabled
00:02:53.821  
00:02:53.821  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:54.754  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:54.754  [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:54.754  [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:54.754  [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:54.754  [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:54.754  [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:54.754  [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:54.754  [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:54.754  [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:54.754  [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:54.754  [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:54.754  [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:55.013  [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:55.013  [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:55.013  [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:55.013  [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:55.013  [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:55.013  [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:55.013  [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:55.013  [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:55.013  [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:55.013  [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:55.013  [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:55.013  [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:55.013  [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:55.013  [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:55.013  [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:55.013  [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:55.013  [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:55.013  [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:55.013  [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:55.272  [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:55.272  [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:55.272  [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:55.272  [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:55.272  [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:55.272  [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:55.272  [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:55.272  [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:55.272  [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:55.272  [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:55.272  [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:55.272  [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:55.272  [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:55.272  [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:55.272  [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:55.272  [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:55.272  [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:55.272  [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:55.272  [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:55.272  [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:55.272  [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:55.272  [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:55.272  [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:55.272  [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:55.531  [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:55.531  [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:55.531  [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:55.531  [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:55.531  [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:55.531  [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:55.531  [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:55.531  [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:55.531  [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:55.531  [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:55.789  [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:55.789  [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:55.789  [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:55.789  [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:55.789  [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:55.789  [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:55.789  [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:55.789  [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:55.789  [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:56.355  [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:56.355  [75/76] Linking static target lib/libxnvme.a
00:02:56.355  [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:56.355  INFO: autodetecting backend as ninja
00:02:56.355  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:56.612  /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:08.812  The Meson build system
00:03:08.812  Version: 1.5.0
00:03:08.812  Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:08.812  Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:08.812  Build type: native build
00:03:08.812  Program cat found: YES (/usr/bin/cat)
00:03:08.812  Project name: DPDK
00:03:08.812  Project version: 24.03.0
00:03:08.812  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:08.812  C linker for the host machine: cc ld.bfd 2.40-14
00:03:08.812  Host machine cpu family: x86_64
00:03:08.812  Host machine cpu: x86_64
00:03:08.812  Message: ## Building in Developer Mode ##
00:03:08.812  Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:08.812  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:08.812  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:08.812  Program python3 found: YES (/usr/bin/python3)
00:03:08.812  Program cat found: YES (/usr/bin/cat)
00:03:08.812  Compiler for C supports arguments -march=native: YES 
00:03:08.812  Checking for size of "void *" : 8 
00:03:08.812  Checking for size of "void *" : 8 (cached)
00:03:08.812  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:03:08.812  Library m found: YES
00:03:08.812  Library numa found: YES
00:03:08.812  Has header "numaif.h" : YES 
00:03:08.812  Library fdt found: NO
00:03:08.812  Library execinfo found: NO
00:03:08.812  Has header "execinfo.h" : YES 
00:03:08.812  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:08.812  Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:08.812  Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:08.812  Run-time dependency jansson found: NO (tried pkgconfig)
00:03:08.812  Run-time dependency openssl found: YES 3.1.1
00:03:08.812  Run-time dependency libpcap found: YES 1.10.4
00:03:08.812  Has header "pcap.h" with dependency libpcap: YES 
00:03:08.812  Compiler for C supports arguments -Wcast-qual: YES 
00:03:08.812  Compiler for C supports arguments -Wdeprecated: YES 
00:03:08.812  Compiler for C supports arguments -Wformat: YES 
00:03:08.812  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:03:08.812  Compiler for C supports arguments -Wformat-security: NO 
00:03:08.812  Compiler for C supports arguments -Wmissing-declarations: YES 
00:03:08.812  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:03:08.812  Compiler for C supports arguments -Wnested-externs: YES 
00:03:08.812  Compiler for C supports arguments -Wold-style-definition: YES 
00:03:08.812  Compiler for C supports arguments -Wpointer-arith: YES 
00:03:08.812  Compiler for C supports arguments -Wsign-compare: YES 
00:03:08.812  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:03:08.812  Compiler for C supports arguments -Wundef: YES 
00:03:08.812  Compiler for C supports arguments -Wwrite-strings: YES 
00:03:08.812  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:03:08.812  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:03:08.812  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:03:08.812  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:03:08.812  Program objdump found: YES (/usr/bin/objdump)
00:03:08.812  Compiler for C supports arguments -mavx512f: YES 
00:03:08.812  Checking if "AVX512 checking" compiles: YES 
00:03:08.812  Fetching value of define "__SSE4_2__" : 1 
00:03:08.812  Fetching value of define "__AES__" : 1 
00:03:08.812  Fetching value of define "__AVX__" : 1 
00:03:08.812  Fetching value of define "__AVX2__" : 1 
00:03:08.812  Fetching value of define "__AVX512BW__" : (undefined) 
00:03:08.812  Fetching value of define "__AVX512CD__" : (undefined) 
00:03:08.812  Fetching value of define "__AVX512DQ__" : (undefined) 
00:03:08.812  Fetching value of define "__AVX512F__" : (undefined) 
00:03:08.812  Fetching value of define "__AVX512VL__" : (undefined) 
00:03:08.812  Fetching value of define "__PCLMUL__" : 1 
00:03:08.812  Fetching value of define "__RDRND__" : 1 
00:03:08.812  Fetching value of define "__RDSEED__" : 1 
00:03:08.812  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:03:08.812  Fetching value of define "__znver1__" : (undefined) 
00:03:08.812  Fetching value of define "__znver2__" : (undefined) 
00:03:08.812  Fetching value of define "__znver3__" : (undefined) 
00:03:08.812  Fetching value of define "__znver4__" : (undefined) 
00:03:08.812  Library asan found: YES
00:03:08.812  Compiler for C supports arguments -Wno-format-truncation: YES 
00:03:08.812  Message: lib/log: Defining dependency "log"
00:03:08.812  Message: lib/kvargs: Defining dependency "kvargs"
00:03:08.812  Message: lib/telemetry: Defining dependency "telemetry"
00:03:08.812  Library rt found: YES
00:03:08.812  Checking for function "getentropy" : NO 
00:03:08.812  Message: lib/eal: Defining dependency "eal"
00:03:08.812  Message: lib/ring: Defining dependency "ring"
00:03:08.812  Message: lib/rcu: Defining dependency "rcu"
00:03:08.812  Message: lib/mempool: Defining dependency "mempool"
00:03:08.812  Message: lib/mbuf: Defining dependency "mbuf"
00:03:08.812  Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:08.812  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:08.812  Compiler for C supports arguments -mpclmul: YES 
00:03:08.812  Compiler for C supports arguments -maes: YES 
00:03:08.813  Compiler for C supports arguments -mavx512f: YES (cached)
00:03:08.813  Compiler for C supports arguments -mavx512bw: YES 
00:03:08.813  Compiler for C supports arguments -mavx512dq: YES 
00:03:08.813  Compiler for C supports arguments -mavx512vl: YES 
00:03:08.813  Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:08.813  Compiler for C supports arguments -mavx2: YES 
00:03:08.813  Compiler for C supports arguments -mavx: YES 
00:03:08.813  Message: lib/net: Defining dependency "net"
00:03:08.813  Message: lib/meter: Defining dependency "meter"
00:03:08.813  Message: lib/ethdev: Defining dependency "ethdev"
00:03:08.813  Message: lib/pci: Defining dependency "pci"
00:03:08.813  Message: lib/cmdline: Defining dependency "cmdline"
00:03:08.813  Message: lib/hash: Defining dependency "hash"
00:03:08.813  Message: lib/timer: Defining dependency "timer"
00:03:08.813  Message: lib/compressdev: Defining dependency "compressdev"
00:03:08.813  Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:08.813  Message: lib/dmadev: Defining dependency "dmadev"
00:03:08.813  Compiler for C supports arguments -Wno-cast-qual: YES 
00:03:08.813  Message: lib/power: Defining dependency "power"
00:03:08.813  Message: lib/reorder: Defining dependency "reorder"
00:03:08.813  Message: lib/security: Defining dependency "security"
00:03:08.813  Has header "linux/userfaultfd.h" : YES 
00:03:08.813  Has header "linux/vduse.h" : YES 
00:03:08.813  Message: lib/vhost: Defining dependency "vhost"
00:03:08.813  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:08.813  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:08.813  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:08.813  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:08.813  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:08.813  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:08.813  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:08.813  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:08.813  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:08.813  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:08.813  Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:08.813  Configuring doxy-api-html.conf using configuration
00:03:08.813  Configuring doxy-api-man.conf using configuration
00:03:08.813  Program mandb found: YES (/usr/bin/mandb)
00:03:08.813  Program sphinx-build found: NO
00:03:08.813  Configuring rte_build_config.h using configuration
00:03:08.813  Message: 
00:03:08.813  =================
00:03:08.813  Applications Enabled
00:03:08.813  =================
00:03:08.813  
00:03:08.813  apps:
00:03:08.813  	
00:03:08.813  
00:03:08.813  Message: 
00:03:08.813  =================
00:03:08.813  Libraries Enabled
00:03:08.813  =================
00:03:08.813  
00:03:08.813  libs:
00:03:08.813  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:03:08.813  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:03:08.813  	cryptodev, dmadev, power, reorder, security, vhost, 
00:03:08.813  
00:03:08.813  Message: 
00:03:08.813  ===============
00:03:08.813  Drivers Enabled
00:03:08.813  ===============
00:03:08.813  
00:03:08.813  common:
00:03:08.813  	
00:03:08.813  bus:
00:03:08.813  	pci, vdev, 
00:03:08.813  mempool:
00:03:08.813  	ring, 
00:03:08.813  dma:
00:03:08.813  	
00:03:08.813  net:
00:03:08.813  	
00:03:08.813  crypto:
00:03:08.813  	
00:03:08.813  compress:
00:03:08.813  	
00:03:08.813  vdpa:
00:03:08.813  	
00:03:08.813  
00:03:08.813  Message: 
00:03:08.813  =================
00:03:08.813  Content Skipped
00:03:08.813  =================
00:03:08.813  
00:03:08.813  apps:
00:03:08.813  	dumpcap:	explicitly disabled via build config
00:03:08.813  	graph:	explicitly disabled via build config
00:03:08.813  	pdump:	explicitly disabled via build config
00:03:08.813  	proc-info:	explicitly disabled via build config
00:03:08.813  	test-acl:	explicitly disabled via build config
00:03:08.813  	test-bbdev:	explicitly disabled via build config
00:03:08.813  	test-cmdline:	explicitly disabled via build config
00:03:08.813  	test-compress-perf:	explicitly disabled via build config
00:03:08.813  	test-crypto-perf:	explicitly disabled via build config
00:03:08.813  	test-dma-perf:	explicitly disabled via build config
00:03:08.813  	test-eventdev:	explicitly disabled via build config
00:03:08.813  	test-fib:	explicitly disabled via build config
00:03:08.813  	test-flow-perf:	explicitly disabled via build config
00:03:08.813  	test-gpudev:	explicitly disabled via build config
00:03:08.813  	test-mldev:	explicitly disabled via build config
00:03:08.813  	test-pipeline:	explicitly disabled via build config
00:03:08.813  	test-pmd:	explicitly disabled via build config
00:03:08.813  	test-regex:	explicitly disabled via build config
00:03:08.813  	test-sad:	explicitly disabled via build config
00:03:08.813  	test-security-perf:	explicitly disabled via build config
00:03:08.813  	
00:03:08.813  libs:
00:03:08.813  	argparse:	explicitly disabled via build config
00:03:08.813  	metrics:	explicitly disabled via build config
00:03:08.813  	acl:	explicitly disabled via build config
00:03:08.813  	bbdev:	explicitly disabled via build config
00:03:08.813  	bitratestats:	explicitly disabled via build config
00:03:08.813  	bpf:	explicitly disabled via build config
00:03:08.813  	cfgfile:	explicitly disabled via build config
00:03:08.813  	distributor:	explicitly disabled via build config
00:03:08.813  	efd:	explicitly disabled via build config
00:03:08.813  	eventdev:	explicitly disabled via build config
00:03:08.813  	dispatcher:	explicitly disabled via build config
00:03:08.813  	gpudev:	explicitly disabled via build config
00:03:08.813  	gro:	explicitly disabled via build config
00:03:08.813  	gso:	explicitly disabled via build config
00:03:08.813  	ip_frag:	explicitly disabled via build config
00:03:08.813  	jobstats:	explicitly disabled via build config
00:03:08.813  	latencystats:	explicitly disabled via build config
00:03:08.813  	lpm:	explicitly disabled via build config
00:03:08.813  	member:	explicitly disabled via build config
00:03:08.813  	pcapng:	explicitly disabled via build config
00:03:08.813  	rawdev:	explicitly disabled via build config
00:03:08.813  	regexdev:	explicitly disabled via build config
00:03:08.813  	mldev:	explicitly disabled via build config
00:03:08.813  	rib:	explicitly disabled via build config
00:03:08.813  	sched:	explicitly disabled via build config
00:03:08.813  	stack:	explicitly disabled via build config
00:03:08.813  	ipsec:	explicitly disabled via build config
00:03:08.813  	pdcp:	explicitly disabled via build config
00:03:08.813  	fib:	explicitly disabled via build config
00:03:08.813  	port:	explicitly disabled via build config
00:03:08.813  	pdump:	explicitly disabled via build config
00:03:08.813  	table:	explicitly disabled via build config
00:03:08.813  	pipeline:	explicitly disabled via build config
00:03:08.813  	graph:	explicitly disabled via build config
00:03:08.813  	node:	explicitly disabled via build config
00:03:08.813  	
00:03:08.813  drivers:
00:03:08.813  	common/cpt:	not in enabled drivers build config
00:03:08.813  	common/dpaax:	not in enabled drivers build config
00:03:08.813  	common/iavf:	not in enabled drivers build config
00:03:08.813  	common/idpf:	not in enabled drivers build config
00:03:08.813  	common/ionic:	not in enabled drivers build config
00:03:08.813  	common/mvep:	not in enabled drivers build config
00:03:08.813  	common/octeontx:	not in enabled drivers build config
00:03:08.813  	bus/auxiliary:	not in enabled drivers build config
00:03:08.813  	bus/cdx:	not in enabled drivers build config
00:03:08.813  	bus/dpaa:	not in enabled drivers build config
00:03:08.813  	bus/fslmc:	not in enabled drivers build config
00:03:08.813  	bus/ifpga:	not in enabled drivers build config
00:03:08.813  	bus/platform:	not in enabled drivers build config
00:03:08.813  	bus/uacce:	not in enabled drivers build config
00:03:08.813  	bus/vmbus:	not in enabled drivers build config
00:03:08.813  	common/cnxk:	not in enabled drivers build config
00:03:08.813  	common/mlx5:	not in enabled drivers build config
00:03:08.813  	common/nfp:	not in enabled drivers build config
00:03:08.813  	common/nitrox:	not in enabled drivers build config
00:03:08.813  	common/qat:	not in enabled drivers build config
00:03:08.813  	common/sfc_efx:	not in enabled drivers build config
00:03:08.813  	mempool/bucket:	not in enabled drivers build config
00:03:08.813  	mempool/cnxk:	not in enabled drivers build config
00:03:08.813  	mempool/dpaa:	not in enabled drivers build config
00:03:08.813  	mempool/dpaa2:	not in enabled drivers build config
00:03:08.813  	mempool/octeontx:	not in enabled drivers build config
00:03:08.813  	mempool/stack:	not in enabled drivers build config
00:03:08.813  	dma/cnxk:	not in enabled drivers build config
00:03:08.813  	dma/dpaa:	not in enabled drivers build config
00:03:08.813  	dma/dpaa2:	not in enabled drivers build config
00:03:08.813  	dma/hisilicon:	not in enabled drivers build config
00:03:08.813  	dma/idxd:	not in enabled drivers build config
00:03:08.813  	dma/ioat:	not in enabled drivers build config
00:03:08.813  	dma/skeleton:	not in enabled drivers build config
00:03:08.813  	net/af_packet:	not in enabled drivers build config
00:03:08.813  	net/af_xdp:	not in enabled drivers build config
00:03:08.813  	net/ark:	not in enabled drivers build config
00:03:08.813  	net/atlantic:	not in enabled drivers build config
00:03:08.813  	net/avp:	not in enabled drivers build config
00:03:08.813  	net/axgbe:	not in enabled drivers build config
00:03:08.813  	net/bnx2x:	not in enabled drivers build config
00:03:08.813  	net/bnxt:	not in enabled drivers build config
00:03:08.813  	net/bonding:	not in enabled drivers build config
00:03:08.813  	net/cnxk:	not in enabled drivers build config
00:03:08.813  	net/cpfl:	not in enabled drivers build config
00:03:08.813  	net/cxgbe:	not in enabled drivers build config
00:03:08.813  	net/dpaa:	not in enabled drivers build config
00:03:08.813  	net/dpaa2:	not in enabled drivers build config
00:03:08.813  	net/e1000:	not in enabled drivers build config
00:03:08.814  	net/ena:	not in enabled drivers build config
00:03:08.814  	net/enetc:	not in enabled drivers build config
00:03:08.814  	net/enetfec:	not in enabled drivers build config
00:03:08.814  	net/enic:	not in enabled drivers build config
00:03:08.814  	net/failsafe:	not in enabled drivers build config
00:03:08.814  	net/fm10k:	not in enabled drivers build config
00:03:08.814  	net/gve:	not in enabled drivers build config
00:03:08.814  	net/hinic:	not in enabled drivers build config
00:03:08.814  	net/hns3:	not in enabled drivers build config
00:03:08.814  	net/i40e:	not in enabled drivers build config
00:03:08.814  	net/iavf:	not in enabled drivers build config
00:03:08.814  	net/ice:	not in enabled drivers build config
00:03:08.814  	net/idpf:	not in enabled drivers build config
00:03:08.814  	net/igc:	not in enabled drivers build config
00:03:08.814  	net/ionic:	not in enabled drivers build config
00:03:08.814  	net/ipn3ke:	not in enabled drivers build config
00:03:08.814  	net/ixgbe:	not in enabled drivers build config
00:03:08.814  	net/mana:	not in enabled drivers build config
00:03:08.814  	net/memif:	not in enabled drivers build config
00:03:08.814  	net/mlx4:	not in enabled drivers build config
00:03:08.814  	net/mlx5:	not in enabled drivers build config
00:03:08.814  	net/mvneta:	not in enabled drivers build config
00:03:08.814  	net/mvpp2:	not in enabled drivers build config
00:03:08.814  	net/netvsc:	not in enabled drivers build config
00:03:08.814  	net/nfb:	not in enabled drivers build config
00:03:08.814  	net/nfp:	not in enabled drivers build config
00:03:08.814  	net/ngbe:	not in enabled drivers build config
00:03:08.814  	net/null:	not in enabled drivers build config
00:03:08.814  	net/octeontx:	not in enabled drivers build config
00:03:08.814  	net/octeon_ep:	not in enabled drivers build config
00:03:08.814  	net/pcap:	not in enabled drivers build config
00:03:08.814  	net/pfe:	not in enabled drivers build config
00:03:08.814  	net/qede:	not in enabled drivers build config
00:03:08.814  	net/ring:	not in enabled drivers build config
00:03:08.814  	net/sfc:	not in enabled drivers build config
00:03:08.814  	net/softnic:	not in enabled drivers build config
00:03:08.814  	net/tap:	not in enabled drivers build config
00:03:08.814  	net/thunderx:	not in enabled drivers build config
00:03:08.814  	net/txgbe:	not in enabled drivers build config
00:03:08.814  	net/vdev_netvsc:	not in enabled drivers build config
00:03:08.814  	net/vhost:	not in enabled drivers build config
00:03:08.814  	net/virtio:	not in enabled drivers build config
00:03:08.814  	net/vmxnet3:	not in enabled drivers build config
00:03:08.814  	raw/*:	missing internal dependency, "rawdev"
00:03:08.814  	crypto/armv8:	not in enabled drivers build config
00:03:08.814  	crypto/bcmfs:	not in enabled drivers build config
00:03:08.814  	crypto/caam_jr:	not in enabled drivers build config
00:03:08.814  	crypto/ccp:	not in enabled drivers build config
00:03:08.814  	crypto/cnxk:	not in enabled drivers build config
00:03:08.814  	crypto/dpaa_sec:	not in enabled drivers build config
00:03:08.814  	crypto/dpaa2_sec:	not in enabled drivers build config
00:03:08.814  	crypto/ipsec_mb:	not in enabled drivers build config
00:03:08.814  	crypto/mlx5:	not in enabled drivers build config
00:03:08.814  	crypto/mvsam:	not in enabled drivers build config
00:03:08.814  	crypto/nitrox:	not in enabled drivers build config
00:03:08.814  	crypto/null:	not in enabled drivers build config
00:03:08.814  	crypto/octeontx:	not in enabled drivers build config
00:03:08.814  	crypto/openssl:	not in enabled drivers build config
00:03:08.814  	crypto/scheduler:	not in enabled drivers build config
00:03:08.814  	crypto/uadk:	not in enabled drivers build config
00:03:08.814  	crypto/virtio:	not in enabled drivers build config
00:03:08.814  	compress/isal:	not in enabled drivers build config
00:03:08.814  	compress/mlx5:	not in enabled drivers build config
00:03:08.814  	compress/nitrox:	not in enabled drivers build config
00:03:08.814  	compress/octeontx:	not in enabled drivers build config
00:03:08.814  	compress/zlib:	not in enabled drivers build config
00:03:08.814  	regex/*:	missing internal dependency, "regexdev"
00:03:08.814  	ml/*:	missing internal dependency, "mldev"
00:03:08.814  	vdpa/ifc:	not in enabled drivers build config
00:03:08.814  	vdpa/mlx5:	not in enabled drivers build config
00:03:08.814  	vdpa/nfp:	not in enabled drivers build config
00:03:08.814  	vdpa/sfc:	not in enabled drivers build config
00:03:08.814  	event/*:	missing internal dependency, "eventdev"
00:03:08.814  	baseband/*:	missing internal dependency, "bbdev"
00:03:08.814  	gpu/*:	missing internal dependency, "gpudev"
00:03:08.814  	
00:03:08.814  
00:03:09.380  Build targets in project: 85
00:03:09.380  
00:03:09.380  DPDK 24.03.0
00:03:09.380  
00:03:09.380    User defined options
00:03:09.380      buildtype          : debug
00:03:09.380      default_library    : shared
00:03:09.380      libdir             : lib
00:03:09.380      prefix             : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:09.380      b_sanitize         : address
00:03:09.380      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:03:09.380      c_link_args        : 
00:03:09.380      cpu_instruction_set: native
00:03:09.380      disable_apps       : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:09.380      disable_libs       : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:09.380      enable_docs        : false
00:03:09.380      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:09.380      enable_kmods       : false
00:03:09.380      max_lcores         : 128
00:03:09.380      tests              : false
00:03:09.380  
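The "User defined options" summary above pins down the Meson configuration used for this DPDK build. Below is a hedged reconstruction of an equivalent `meson setup` invocation, with every value copied from that summary and standard Meson/DPDK option names; the real command is issued by SPDK's build scripts and may differ in form. The build directory name matches the one ninja enters below.

    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        --buildtype=debug \
        --default-library=shared \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
        -Denable_kmods=false \
        -Dmax_lcores=128 \
        -Dtests=false

Note how these options line up with the configure output: -Db_sanitize=address matches the earlier "Library asan found: YES" check, and the disable_apps/disable_libs lists correspond one-to-one with the "Content Skipped" sections above.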
00:03:09.380  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:10.315  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:10.316  [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:10.316  [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:10.575  [3/268] Linking static target lib/librte_kvargs.a
00:03:10.575  [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:10.575  [5/268] Linking static target lib/librte_log.a
00:03:10.575  [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:11.139  [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.397  [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:11.655  [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:11.655  [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:11.655  [11/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.914  [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:11.914  [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:11.914  [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:11.914  [15/268] Linking target lib/librte_log.so.24.1
00:03:11.914  [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:12.172  [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:12.172  [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:12.172  [19/268] Linking static target lib/librte_telemetry.a
00:03:12.430  [20/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:12.430  [21/268] Linking target lib/librte_kvargs.so.24.1
00:03:12.430  [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:12.997  [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:13.256  [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:13.256  [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:13.514  [26/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.514  [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:13.514  [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:13.514  [29/268] Linking target lib/librte_telemetry.so.24.1
00:03:13.514  [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:13.514  [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:13.514  [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:13.772  [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:13.772  [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:13.772  [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:14.031  [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:14.289  [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:14.856  [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:15.115  [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:15.115  [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:15.115  [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:15.115  [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:15.115  [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:15.373  [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:15.373  [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:15.373  [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:15.631  [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:15.631  [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:15.890  [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:16.149  [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:16.149  [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:16.407  [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:16.665  [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:16.924  [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:16.924  [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:16.924  [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:16.924  [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:17.182  [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:17.440  [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:17.440  [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:17.440  [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:17.698  [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:17.698  [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:18.264  [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:18.264  [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:18.528  [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:18.528  [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:18.787  [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:18.787  [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:19.353  [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:19.353  [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:19.611  [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:19.611  [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:19.611  [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:19.611  [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:19.611  [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:19.868  [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:20.127  [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:20.456  [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:20.456  [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:20.456  [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:20.761  [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:20.761  [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:21.049  [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:21.049  [85/268] Linking static target lib/librte_eal.a
00:03:21.313  [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:21.313  [87/268] Linking static target lib/librte_ring.a
00:03:21.593  [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:21.593  [89/268] Linking static target lib/librte_rcu.a
00:03:21.593  [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:21.593  [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:21.593  [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:21.861  [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:21.861  [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.120  [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:22.120  [96/268] Linking static target lib/librte_mempool.a
00:03:22.378  [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:22.378  [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.637  [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:22.637  [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:23.204  [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:23.204  [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:23.463  [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:23.463  [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:23.726  [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:23.726  [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:23.726  [107/268] Linking static target lib/librte_net.a
00:03:23.990  [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:23.990  [109/268] Linking static target lib/librte_meter.a
00:03:23.990  [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:23.990  [111/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:24.257  [112/268] Linking static target lib/librte_mbuf.a
00:03:24.832  [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:24.832  [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:24.832  [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:24.832  [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:25.090  [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:25.090  [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:26.024  [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:26.024  [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.282  [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:26.541  [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:27.476  [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:27.476  [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:27.476  [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:27.735  [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:27.735  [127/268] Linking static target lib/librte_pci.a
00:03:27.735  [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:27.735  [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:27.994  [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:27.994  [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:27.994  [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:27.994  [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:28.253  [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:28.254  [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.254  [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:28.254  [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:28.254  [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:28.512  [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:28.512  [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:28.512  [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:28.512  [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:28.512  [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:28.771  [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:29.031  [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:29.289  [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:29.289  [147/268] Linking static target lib/librte_cmdline.a
00:03:29.289  [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:29.548  [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:30.114  [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:30.114  [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:30.114  [152/268] Linking static target lib/librte_timer.a
00:03:30.114  [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:30.114  [154/268] Linking static target lib/librte_ethdev.a
00:03:30.372  [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:30.640  [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:30.640  [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:31.215  [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:31.215  [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:31.215  [160/268] Linking static target lib/librte_hash.a
00:03:31.472  [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:31.472  [162/268] Linking static target lib/librte_compressdev.a
00:03:31.472  [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:31.472  [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:31.730  [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:31.988  [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:32.281  [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:32.281  [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:32.541  [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:32.541  [170/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:32.541  [171/268] Linking static target lib/librte_dmadev.a
00:03:33.104  [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:33.104  [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:33.104  [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.362  [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.642  [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:33.902  [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:33.902  [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:34.160  [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:34.160  [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:34.418  [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:34.677  [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:34.677  [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:34.677  [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:34.677  [185/268] Linking static target lib/librte_cryptodev.a
00:03:34.677  [186/268] Linking static target lib/librte_power.a
00:03:35.243  [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:35.243  [188/268] Linking static target lib/librte_reorder.a
00:03:35.513  [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:35.513  [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:35.777  [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:35.777  [192/268] Linking static target lib/librte_security.a
00:03:36.038  [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:36.297  [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:36.555  [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:36.817  [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:36.817  [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:37.389  [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:37.667  [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:38.241  [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:38.241  [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:38.241  [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:38.241  [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:38.506  [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:38.770  [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:39.029  [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:39.290  [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:39.549  [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:39.549  [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:39.549  [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:39.549  [211/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:39.808  [212/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:39.808  [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:40.067  [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:40.067  [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:40.067  [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:40.067  [217/268] Linking static target drivers/librte_bus_vdev.a
00:03:40.067  [218/268] Linking target lib/librte_eal.so.24.1
00:03:40.067  [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:40.067  [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:40.067  [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:40.067  [222/268] Linking static target drivers/librte_bus_pci.a
00:03:40.067  [223/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:40.325  [224/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:40.325  [225/268] Linking target lib/librte_ring.so.24.1
00:03:40.325  [226/268] Linking target lib/librte_timer.so.24.1
00:03:40.583  [227/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:40.583  [228/268] Linking target lib/librte_pci.so.24.1
00:03:40.583  [229/268] Linking target lib/librte_meter.so.24.1
00:03:40.583  [230/268] Linking target lib/librte_dmadev.so.24.1
00:03:40.583  [231/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:40.841  [232/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:40.841  [233/268] Linking target drivers/librte_bus_vdev.so.24.1
00:03:40.841  [234/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:03:40.841  [235/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:40.841  [236/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:40.841  [237/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:40.842  [238/268] Linking static target drivers/librte_mempool_ring.a
00:03:40.842  [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:40.842  [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:03:40.842  [241/268] Linking target lib/librte_mempool.so.24.1
00:03:40.842  [242/268] Linking target lib/librte_rcu.so.24.1
00:03:41.100  [243/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:41.100  [244/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:41.100  [245/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:41.100  [246/268] Linking target drivers/librte_bus_pci.so.24.1
00:03:41.100  [247/268] Linking target drivers/librte_mempool_ring.so.24.1
00:03:41.100  [248/268] Linking target lib/librte_mbuf.so.24.1
00:03:41.359  [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:41.618  [250/268] Linking target lib/librte_compressdev.so.24.1
00:03:41.618  [251/268] Linking target lib/librte_net.so.24.1
00:03:41.618  [252/268] Linking target lib/librte_cryptodev.so.24.1
00:03:41.618  [253/268] Linking target lib/librte_reorder.so.24.1
00:03:41.876  [254/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:41.876  [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:41.876  [256/268] Linking target lib/librte_cmdline.so.24.1
00:03:41.876  [257/268] Linking target lib/librte_security.so.24.1
00:03:41.876  [258/268] Linking target lib/librte_hash.so.24.1
00:03:42.135  [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:03:43.074  [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:43.074  [261/268] Linking target lib/librte_ethdev.so.24.1
00:03:43.333  [262/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:03:43.333  [263/268] Linking target lib/librte_power.so.24.1
00:03:43.591  [264/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:50.151  [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:50.151  [266/268] Linking static target lib/librte_vhost.a
00:03:51.091  [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:51.091  [268/268] Linking target lib/librte_vhost.so.24.1
00:03:51.091  INFO: autodetecting backend as ninja
00:03:51.091  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:04:17.786    CC lib/log/log.o
00:04:17.786    CC lib/ut/ut.o
00:04:17.786    CC lib/log/log_flags.o
00:04:17.786    CC lib/log/log_deprecated.o
00:04:17.786    CC lib/ut_mock/mock.o
00:04:18.044    LIB libspdk_ut_mock.a
00:04:18.044    LIB libspdk_ut.a
00:04:18.045    SO libspdk_ut_mock.so.6.0
00:04:18.045    LIB libspdk_log.a
00:04:18.045    SO libspdk_ut.so.2.0
00:04:18.045    SO libspdk_log.so.7.1
00:04:18.045    SYMLINK libspdk_ut_mock.so
00:04:18.045    SYMLINK libspdk_ut.so
00:04:18.303    SYMLINK libspdk_log.so
00:04:18.303    CXX lib/trace_parser/trace.o
00:04:18.303    CC lib/util/base64.o
00:04:18.303    CC lib/util/bit_array.o
00:04:18.303    CC lib/util/cpuset.o
00:04:18.303    CC lib/util/crc32c.o
00:04:18.303    CC lib/util/crc32.o
00:04:18.303    CC lib/util/crc16.o
00:04:18.303    CC lib/ioat/ioat.o
00:04:18.303    CC lib/dma/dma.o
00:04:18.562    CC lib/vfio_user/host/vfio_user_pci.o
00:04:18.562    CC lib/vfio_user/host/vfio_user.o
00:04:18.562    CC lib/util/crc32_ieee.o
00:04:18.562    CC lib/util/crc64.o
00:04:18.562    CC lib/util/dif.o
00:04:18.562    CC lib/util/fd.o
00:04:18.562    LIB libspdk_dma.a
00:04:18.820    SO libspdk_dma.so.5.0
00:04:18.820    CC lib/util/fd_group.o
00:04:18.820    SYMLINK libspdk_dma.so
00:04:18.820    CC lib/util/file.o
00:04:18.820    CC lib/util/hexlify.o
00:04:18.820    CC lib/util/iov.o
00:04:18.820    CC lib/util/math.o
00:04:18.820    CC lib/util/net.o
00:04:18.820    LIB libspdk_vfio_user.a
00:04:18.820    LIB libspdk_ioat.a
00:04:18.820    SO libspdk_vfio_user.so.5.0
00:04:18.820    SO libspdk_ioat.so.7.0
00:04:19.080    CC lib/util/pipe.o
00:04:19.080    SYMLINK libspdk_ioat.so
00:04:19.080    SYMLINK libspdk_vfio_user.so
00:04:19.080    CC lib/util/strerror_tls.o
00:04:19.080    CC lib/util/string.o
00:04:19.080    CC lib/util/uuid.o
00:04:19.080    CC lib/util/xor.o
00:04:19.080    CC lib/util/zipf.o
00:04:19.080    CC lib/util/md5.o
00:04:20.014    LIB libspdk_util.a
00:04:20.014    SO libspdk_util.so.10.1
00:04:20.014    LIB libspdk_trace_parser.a
00:04:20.014    SO libspdk_trace_parser.so.6.0
00:04:20.014    SYMLINK libspdk_util.so
00:04:20.014    SYMLINK libspdk_trace_parser.so
00:04:20.272    CC lib/json/json_parse.o
00:04:20.272    CC lib/json/json_util.o
00:04:20.272    CC lib/vmd/vmd.o
00:04:20.272    CC lib/json/json_write.o
00:04:20.272    CC lib/vmd/led.o
00:04:20.272    CC lib/rdma_utils/rdma_utils.o
00:04:20.272    CC lib/idxd/idxd.o
00:04:20.272    CC lib/conf/conf.o
00:04:20.272    CC lib/idxd/idxd_user.o
00:04:20.273    CC lib/env_dpdk/env.o
00:04:20.530    CC lib/env_dpdk/memory.o
00:04:20.530    CC lib/env_dpdk/pci.o
00:04:20.530    CC lib/idxd/idxd_kernel.o
00:04:20.530    LIB libspdk_rdma_utils.a
00:04:20.530    SO libspdk_rdma_utils.so.1.0
00:04:20.788    LIB libspdk_json.a
00:04:20.789    CC lib/env_dpdk/init.o
00:04:20.789    SO libspdk_json.so.6.0
00:04:20.789    SYMLINK libspdk_rdma_utils.so
00:04:20.789    LIB libspdk_conf.a
00:04:20.789    CC lib/env_dpdk/threads.o
00:04:20.789    SO libspdk_conf.so.6.0
00:04:20.789    SYMLINK libspdk_json.so
00:04:20.789    SYMLINK libspdk_conf.so
00:04:20.789    CC lib/env_dpdk/pci_ioat.o
00:04:21.046    CC lib/env_dpdk/pci_virtio.o
00:04:21.046    CC lib/rdma_provider/common.o
00:04:21.046    CC lib/jsonrpc/jsonrpc_server.o
00:04:21.046    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:04:21.046    CC lib/jsonrpc/jsonrpc_client.o
00:04:21.046    LIB libspdk_idxd.a
00:04:21.046    SO libspdk_idxd.so.12.1
00:04:21.304    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:04:21.304    SYMLINK libspdk_idxd.so
00:04:21.304    CC lib/env_dpdk/pci_vmd.o
00:04:21.304    CC lib/rdma_provider/rdma_provider_verbs.o
00:04:21.304    CC lib/env_dpdk/pci_idxd.o
00:04:21.304    CC lib/env_dpdk/pci_event.o
00:04:21.304    CC lib/env_dpdk/sigbus_handler.o
00:04:21.304    LIB libspdk_vmd.a
00:04:21.304    SO libspdk_vmd.so.6.0
00:04:21.304    CC lib/env_dpdk/pci_dpdk.o
00:04:21.304    CC lib/env_dpdk/pci_dpdk_2207.o
00:04:21.304    CC lib/env_dpdk/pci_dpdk_2211.o
00:04:21.304    SYMLINK libspdk_vmd.so
00:04:21.561    LIB libspdk_jsonrpc.a
00:04:21.561    SO libspdk_jsonrpc.so.6.0
00:04:21.561    LIB libspdk_rdma_provider.a
00:04:21.561    SO libspdk_rdma_provider.so.7.0
00:04:21.561    SYMLINK libspdk_jsonrpc.so
00:04:21.561    SYMLINK libspdk_rdma_provider.so
00:04:21.819    CC lib/rpc/rpc.o
00:04:22.077    LIB libspdk_rpc.a
00:04:22.077    SO libspdk_rpc.so.6.0
00:04:22.077    SYMLINK libspdk_rpc.so
00:04:22.335    CC lib/notify/notify.o
00:04:22.335    CC lib/notify/notify_rpc.o
00:04:22.335    CC lib/trace/trace_flags.o
00:04:22.335    CC lib/trace/trace.o
00:04:22.335    CC lib/trace/trace_rpc.o
00:04:22.335    CC lib/keyring/keyring.o
00:04:22.335    CC lib/keyring/keyring_rpc.o
00:04:22.335    LIB libspdk_env_dpdk.a
00:04:22.593    SO libspdk_env_dpdk.so.15.1
00:04:22.593    LIB libspdk_notify.a
00:04:22.593    SO libspdk_notify.so.6.0
00:04:22.851    SYMLINK libspdk_notify.so
00:04:22.851    SYMLINK libspdk_env_dpdk.so
00:04:22.851    LIB libspdk_keyring.a
00:04:22.851    LIB libspdk_trace.a
00:04:22.851    SO libspdk_keyring.so.2.0
00:04:22.851    SO libspdk_trace.so.11.0
00:04:22.851    SYMLINK libspdk_keyring.so
00:04:22.851    SYMLINK libspdk_trace.so
00:04:23.110    CC lib/thread/thread.o
00:04:23.110    CC lib/thread/iobuf.o
00:04:23.110    CC lib/sock/sock_rpc.o
00:04:23.110    CC lib/sock/sock.o
00:04:23.676    LIB libspdk_sock.a
00:04:23.935    SO libspdk_sock.so.10.0
00:04:23.935    SYMLINK libspdk_sock.so
00:04:24.193    CC lib/nvme/nvme_ctrlr_cmd.o
00:04:24.193    CC lib/nvme/nvme_ctrlr.o
00:04:24.193    CC lib/nvme/nvme_fabric.o
00:04:24.193    CC lib/nvme/nvme_ns.o
00:04:24.193    CC lib/nvme/nvme_ns_cmd.o
00:04:24.193    CC lib/nvme/nvme_pcie_common.o
00:04:24.193    CC lib/nvme/nvme_pcie.o
00:04:24.193    CC lib/nvme/nvme_qpair.o
00:04:24.193    CC lib/nvme/nvme.o
00:04:25.565    CC lib/nvme/nvme_quirks.o
00:04:25.565    CC lib/nvme/nvme_transport.o
00:04:25.565    CC lib/nvme/nvme_discovery.o
00:04:25.565    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:04:25.824    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:04:25.824    CC lib/nvme/nvme_tcp.o
00:04:25.824    LIB libspdk_thread.a
00:04:26.082    SO libspdk_thread.so.11.0
00:04:26.082    CC lib/nvme/nvme_opal.o
00:04:26.082    SYMLINK libspdk_thread.so
00:04:26.082    CC lib/nvme/nvme_io_msg.o
00:04:26.376    CC lib/nvme/nvme_poll_group.o
00:04:26.376    CC lib/nvme/nvme_zns.o
00:04:26.638    CC lib/nvme/nvme_stubs.o
00:04:26.897    CC lib/accel/accel.o
00:04:26.897    CC lib/blob/blobstore.o
00:04:27.155    CC lib/nvme/nvme_auth.o
00:04:27.155    CC lib/blob/request.o
00:04:27.413    CC lib/blob/zeroes.o
00:04:27.413    CC lib/blob/blob_bs_dev.o
00:04:27.979    CC lib/init/json_config.o
00:04:27.979    CC lib/virtio/virtio.o
00:04:27.979    CC lib/virtio/virtio_vhost_user.o
00:04:27.979    CC lib/nvme/nvme_cuse.o
00:04:27.979    CC lib/fsdev/fsdev.o
00:04:28.237    CC lib/init/subsystem.o
00:04:28.495    CC lib/fsdev/fsdev_io.o
00:04:28.495    CC lib/fsdev/fsdev_rpc.o
00:04:28.495    CC lib/virtio/virtio_vfio_user.o
00:04:28.754    CC lib/init/subsystem_rpc.o
00:04:28.754    CC lib/virtio/virtio_pci.o
00:04:28.754    CC lib/accel/accel_rpc.o
00:04:28.754    CC lib/nvme/nvme_rdma.o
00:04:29.012    CC lib/init/rpc.o
00:04:29.012    CC lib/accel/accel_sw.o
00:04:29.012    LIB libspdk_init.a
00:04:29.012    LIB libspdk_virtio.a
00:04:29.269    SO libspdk_virtio.so.7.0
00:04:29.269    SO libspdk_init.so.6.0
00:04:29.269    SYMLINK libspdk_init.so
00:04:29.269    SYMLINK libspdk_virtio.so
00:04:29.269    LIB libspdk_fsdev.a
00:04:29.269    SO libspdk_fsdev.so.2.0
00:04:29.527    SYMLINK libspdk_fsdev.so
00:04:29.527    LIB libspdk_accel.a
00:04:29.527    CC lib/event/app.o
00:04:29.527    CC lib/event/log_rpc.o
00:04:29.527    CC lib/event/reactor.o
00:04:29.527    CC lib/event/scheduler_static.o
00:04:29.527    CC lib/event/app_rpc.o
00:04:29.527    SO libspdk_accel.so.16.0
00:04:29.527    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:04:29.527    SYMLINK libspdk_accel.so
00:04:29.784    CC lib/bdev/bdev_rpc.o
00:04:29.784    CC lib/bdev/bdev.o
00:04:29.784    CC lib/bdev/bdev_zone.o
00:04:29.784    CC lib/bdev/part.o
00:04:30.043    CC lib/bdev/scsi_nvme.o
00:04:30.301    LIB libspdk_event.a
00:04:30.301    SO libspdk_event.so.14.0
00:04:30.301    SYMLINK libspdk_event.so
00:04:30.864    LIB libspdk_fuse_dispatcher.a
00:04:30.864    SO libspdk_fuse_dispatcher.so.1.0
00:04:30.864    LIB libspdk_nvme.a
00:04:30.864    SYMLINK libspdk_fuse_dispatcher.so
00:04:31.122    SO libspdk_nvme.so.15.0
00:04:31.688    SYMLINK libspdk_nvme.so
00:04:32.254    LIB libspdk_blob.a
00:04:32.512    SO libspdk_blob.so.11.0
00:04:32.512    SYMLINK libspdk_blob.so
00:04:32.771    CC lib/blobfs/blobfs.o
00:04:32.771    CC lib/blobfs/tree.o
00:04:32.771    CC lib/lvol/lvol.o
00:04:34.678    LIB libspdk_blobfs.a
00:04:34.678    SO libspdk_blobfs.so.10.0
00:04:34.678    SYMLINK libspdk_blobfs.so
00:04:34.678    LIB libspdk_lvol.a
00:04:34.678    LIB libspdk_bdev.a
00:04:34.678    SO libspdk_lvol.so.10.0
00:04:34.678    SO libspdk_bdev.so.17.0
00:04:34.678    SYMLINK libspdk_lvol.so
00:04:34.936    SYMLINK libspdk_bdev.so
00:04:35.194    CC lib/scsi/dev.o
00:04:35.194    CC lib/scsi/lun.o
00:04:35.194    CC lib/scsi/port.o
00:04:35.194    CC lib/scsi/scsi.o
00:04:35.194    CC lib/nvmf/ctrlr.o
00:04:35.194    CC lib/scsi/scsi_bdev.o
00:04:35.194    CC lib/scsi/scsi_pr.o
00:04:35.194    CC lib/ublk/ublk.o
00:04:35.194    CC lib/nbd/nbd.o
00:04:35.194    CC lib/ftl/ftl_core.o
00:04:35.452    CC lib/ftl/ftl_init.o
00:04:35.452    CC lib/ftl/ftl_layout.o
00:04:35.452    CC lib/nvmf/ctrlr_discovery.o
00:04:35.711    CC lib/ftl/ftl_debug.o
00:04:35.711    CC lib/ftl/ftl_io.o
00:04:35.970    CC lib/ftl/ftl_sb.o
00:04:35.970    CC lib/ublk/ublk_rpc.o
00:04:35.970    CC lib/nbd/nbd_rpc.o
00:04:35.970    CC lib/ftl/ftl_l2p.o
00:04:35.970    CC lib/nvmf/ctrlr_bdev.o
00:04:36.228    LIB libspdk_nbd.a
00:04:36.228    CC lib/scsi/scsi_rpc.o
00:04:36.228    SO libspdk_nbd.so.7.0
00:04:36.228    CC lib/ftl/ftl_l2p_flat.o
00:04:36.228    CC lib/ftl/ftl_nv_cache.o
00:04:36.228    CC lib/nvmf/subsystem.o
00:04:36.228    CC lib/ftl/ftl_band.o
00:04:36.228    SYMLINK libspdk_nbd.so
00:04:36.486    CC lib/nvmf/nvmf.o
00:04:36.486    CC lib/scsi/task.o
00:04:36.486    LIB libspdk_ublk.a
00:04:36.745    SO libspdk_ublk.so.3.0
00:04:36.745    CC lib/ftl/ftl_band_ops.o
00:04:36.745    CC lib/ftl/ftl_writer.o
00:04:36.745    LIB libspdk_scsi.a
00:04:36.745    SYMLINK libspdk_ublk.so
00:04:36.745    CC lib/ftl/ftl_rq.o
00:04:36.745    SO libspdk_scsi.so.9.0
00:04:37.004    SYMLINK libspdk_scsi.so
00:04:37.004    CC lib/ftl/ftl_reloc.o
00:04:37.262    CC lib/ftl/ftl_l2p_cache.o
00:04:37.262    CC lib/nvmf/nvmf_rpc.o
00:04:37.262    CC lib/nvmf/transport.o
00:04:37.521    CC lib/iscsi/conn.o
00:04:37.521    CC lib/vhost/vhost.o
00:04:37.780    CC lib/vhost/vhost_rpc.o
00:04:38.039    CC lib/vhost/vhost_scsi.o
00:04:38.297    CC lib/ftl/ftl_p2l.o
00:04:38.297    CC lib/iscsi/init_grp.o
00:04:38.555    CC lib/ftl/ftl_p2l_log.o
00:04:38.555    CC lib/ftl/mngt/ftl_mngt.o
00:04:38.813    CC lib/nvmf/tcp.o
00:04:38.813    CC lib/vhost/vhost_blk.o
00:04:38.813    CC lib/iscsi/iscsi.o
00:04:38.813    CC lib/vhost/rte_vhost_user.o
00:04:39.071    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:04:39.071    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:04:39.071    CC lib/iscsi/param.o
00:04:39.329    CC lib/iscsi/portal_grp.o
00:04:39.329    CC lib/iscsi/tgt_node.o
00:04:39.329    CC lib/iscsi/iscsi_subsystem.o
00:04:39.329    CC lib/ftl/mngt/ftl_mngt_startup.o
00:04:39.619    CC lib/iscsi/iscsi_rpc.o
00:04:39.877    CC lib/ftl/mngt/ftl_mngt_md.o
00:04:39.877    CC lib/iscsi/task.o
00:04:39.877    CC lib/nvmf/stubs.o
00:04:40.135    CC lib/ftl/mngt/ftl_mngt_misc.o
00:04:40.135    CC lib/nvmf/mdns_server.o
00:04:40.135    CC lib/nvmf/rdma.o
00:04:40.135    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:04:40.393    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:04:40.393    CC lib/ftl/mngt/ftl_mngt_band.o
00:04:40.393    CC lib/nvmf/auth.o
00:04:40.652    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:04:40.652    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:04:40.910    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:04:40.910    LIB libspdk_vhost.a
00:04:40.910    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:04:40.910    CC lib/ftl/utils/ftl_conf.o
00:04:40.910    SO libspdk_vhost.so.8.0
00:04:40.910    CC lib/ftl/utils/ftl_md.o
00:04:41.169    CC lib/ftl/utils/ftl_mempool.o
00:04:41.169    SYMLINK libspdk_vhost.so
00:04:41.169    CC lib/ftl/utils/ftl_bitmap.o
00:04:41.169    CC lib/ftl/utils/ftl_property.o
00:04:41.429    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:04:41.429    LIB libspdk_iscsi.a
00:04:41.429    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:04:41.429    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:04:41.429    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:04:41.688    SO libspdk_iscsi.so.8.0
00:04:41.688    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:04:41.688    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:04:41.947    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:04:41.947    CC lib/ftl/upgrade/ftl_sb_v3.o
00:04:41.947    CC lib/ftl/upgrade/ftl_sb_v5.o
00:04:41.947    SYMLINK libspdk_iscsi.so
00:04:41.947    CC lib/ftl/nvc/ftl_nvc_dev.o
00:04:41.947    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:04:42.205    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:04:42.205    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:04:42.205    CC lib/ftl/base/ftl_base_dev.o
00:04:42.205    CC lib/ftl/base/ftl_base_bdev.o
00:04:42.205    CC lib/ftl/ftl_trace.o
00:04:42.772    LIB libspdk_ftl.a
00:04:43.031    SO libspdk_ftl.so.9.0
00:04:43.290    SYMLINK libspdk_ftl.so
00:04:43.857    LIB libspdk_nvmf.a
00:04:44.114    SO libspdk_nvmf.so.20.0
00:04:44.373    SYMLINK libspdk_nvmf.so
00:04:44.940    CC module/env_dpdk/env_dpdk_rpc.o
00:04:44.940    CC module/accel/dsa/accel_dsa.o
00:04:44.940    CC module/accel/error/accel_error.o
00:04:44.940    CC module/accel/iaa/accel_iaa.o
00:04:44.940    CC module/keyring/file/keyring.o
00:04:44.940    CC module/blob/bdev/blob_bdev.o
00:04:44.940    CC module/fsdev/aio/fsdev_aio.o
00:04:44.940    CC module/accel/ioat/accel_ioat.o
00:04:44.940    CC module/scheduler/dynamic/scheduler_dynamic.o
00:04:44.940    CC module/sock/posix/posix.o
00:04:44.940    LIB libspdk_env_dpdk_rpc.a
00:04:44.940    SO libspdk_env_dpdk_rpc.so.6.0
00:04:45.198    SYMLINK libspdk_env_dpdk_rpc.so
00:04:45.198    CC module/accel/ioat/accel_ioat_rpc.o
00:04:45.198    CC module/keyring/file/keyring_rpc.o
00:04:45.198    CC module/accel/iaa/accel_iaa_rpc.o
00:04:45.198    CC module/fsdev/aio/fsdev_aio_rpc.o
00:04:45.456    LIB libspdk_scheduler_dynamic.a
00:04:45.456    CC module/accel/error/accel_error_rpc.o
00:04:45.456    SO libspdk_scheduler_dynamic.so.4.0
00:04:45.456    LIB libspdk_blob_bdev.a
00:04:45.456    LIB libspdk_accel_ioat.a
00:04:45.456    LIB libspdk_keyring_file.a
00:04:45.456    CC module/accel/dsa/accel_dsa_rpc.o
00:04:45.456    SO libspdk_accel_ioat.so.6.0
00:04:45.456    SO libspdk_blob_bdev.so.11.0
00:04:45.456    SYMLINK libspdk_scheduler_dynamic.so
00:04:45.456    LIB libspdk_accel_iaa.a
00:04:45.456    SO libspdk_keyring_file.so.2.0
00:04:45.457    SO libspdk_accel_iaa.so.3.0
00:04:45.457    SYMLINK libspdk_blob_bdev.so
00:04:45.457    SYMLINK libspdk_accel_ioat.so
00:04:45.457    LIB libspdk_accel_error.a
00:04:45.457    CC module/fsdev/aio/linux_aio_mgr.o
00:04:45.457    SO libspdk_accel_error.so.2.0
00:04:45.714    SYMLINK libspdk_keyring_file.so
00:04:45.714    LIB libspdk_accel_dsa.a
00:04:45.714    SYMLINK libspdk_accel_iaa.so
00:04:45.714    SO libspdk_accel_dsa.so.5.0
00:04:45.714    SYMLINK libspdk_accel_error.so
00:04:45.714    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:04:45.715    SYMLINK libspdk_accel_dsa.so
00:04:45.715    CC module/scheduler/gscheduler/gscheduler.o
00:04:45.978    CC module/keyring/linux/keyring.o
00:04:45.978    CC module/bdev/error/vbdev_error.o
00:04:45.978    LIB libspdk_scheduler_dpdk_governor.a
00:04:45.978    CC module/bdev/delay/vbdev_delay.o
00:04:45.978    CC module/bdev/gpt/gpt.o
00:04:45.978    LIB libspdk_scheduler_gscheduler.a
00:04:45.978    CC module/blobfs/bdev/blobfs_bdev.o
00:04:45.978    CC module/bdev/lvol/vbdev_lvol.o
00:04:45.978    SO libspdk_scheduler_dpdk_governor.so.4.0
00:04:46.236    CC module/keyring/linux/keyring_rpc.o
00:04:46.236    SO libspdk_scheduler_gscheduler.so.4.0
00:04:46.236    LIB libspdk_fsdev_aio.a
00:04:46.236    SYMLINK libspdk_scheduler_dpdk_governor.so
00:04:46.236    CC module/bdev/gpt/vbdev_gpt.o
00:04:46.236    SYMLINK libspdk_scheduler_gscheduler.so
00:04:46.236    CC module/bdev/delay/vbdev_delay_rpc.o
00:04:46.236    SO libspdk_fsdev_aio.so.1.0
00:04:46.236    LIB libspdk_keyring_linux.a
00:04:46.236    SO libspdk_keyring_linux.so.1.0
00:04:46.236    SYMLINK libspdk_fsdev_aio.so
00:04:46.494    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:04:46.494    SYMLINK libspdk_keyring_linux.so
00:04:46.494    CC module/bdev/error/vbdev_error_rpc.o
00:04:46.494    LIB libspdk_sock_posix.a
00:04:46.494    CC module/bdev/malloc/bdev_malloc.o
00:04:46.494    CC module/bdev/malloc/bdev_malloc_rpc.o
00:04:46.494    CC module/bdev/null/bdev_null.o
00:04:46.494    LIB libspdk_bdev_delay.a
00:04:46.494    SO libspdk_sock_posix.so.6.0
00:04:46.752    SO libspdk_bdev_delay.so.6.0
00:04:46.752    LIB libspdk_blobfs_bdev.a
00:04:46.752    SO libspdk_blobfs_bdev.so.6.0
00:04:46.752    LIB libspdk_bdev_error.a
00:04:46.752    SYMLINK libspdk_sock_posix.so
00:04:46.752    SYMLINK libspdk_bdev_delay.so
00:04:46.752    SO libspdk_bdev_error.so.6.0
00:04:46.752    CC module/bdev/nvme/bdev_nvme.o
00:04:46.752    SYMLINK libspdk_blobfs_bdev.so
00:04:46.752    LIB libspdk_bdev_gpt.a
00:04:46.752    CC module/bdev/nvme/bdev_nvme_rpc.o
00:04:46.752    SYMLINK libspdk_bdev_error.so
00:04:46.752    SO libspdk_bdev_gpt.so.6.0
00:04:47.011    CC module/bdev/nvme/nvme_rpc.o
00:04:47.011    SYMLINK libspdk_bdev_gpt.so
00:04:47.011    CC module/bdev/passthru/vbdev_passthru.o
00:04:47.011    CC module/bdev/null/bdev_null_rpc.o
00:04:47.011    CC module/bdev/raid/bdev_raid.o
00:04:47.011    CC module/bdev/split/vbdev_split.o
00:04:47.011    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:04:47.269    LIB libspdk_bdev_malloc.a
00:04:47.269    CC module/bdev/zone_block/vbdev_zone_block.o
00:04:47.269    LIB libspdk_bdev_null.a
00:04:47.269    CC module/bdev/split/vbdev_split_rpc.o
00:04:47.269    SO libspdk_bdev_malloc.so.6.0
00:04:47.269    SO libspdk_bdev_null.so.6.0
00:04:47.269    CC module/bdev/raid/bdev_raid_rpc.o
00:04:47.269    SYMLINK libspdk_bdev_null.so
00:04:47.269    SYMLINK libspdk_bdev_malloc.so
00:04:47.269    LIB libspdk_bdev_split.a
00:04:47.528    SO libspdk_bdev_split.so.6.0
00:04:47.528    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:04:47.528    CC module/bdev/xnvme/bdev_xnvme.o
00:04:47.528    CC module/bdev/aio/bdev_aio.o
00:04:47.528    SYMLINK libspdk_bdev_split.so
00:04:47.528    CC module/bdev/aio/bdev_aio_rpc.o
00:04:47.528    LIB libspdk_bdev_lvol.a
00:04:47.528    CC module/bdev/raid/bdev_raid_sb.o
00:04:47.528    SO libspdk_bdev_lvol.so.6.0
00:04:47.787    LIB libspdk_bdev_passthru.a
00:04:47.787    SYMLINK libspdk_bdev_lvol.so
00:04:47.787    CC module/bdev/xnvme/bdev_xnvme_rpc.o
00:04:47.787    SO libspdk_bdev_passthru.so.6.0
00:04:47.787    CC module/bdev/nvme/bdev_mdns_client.o
00:04:47.787    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:04:47.787    SYMLINK libspdk_bdev_passthru.so
00:04:48.046    CC module/bdev/raid/raid0.o
00:04:48.046    CC module/bdev/raid/raid1.o
00:04:48.046    LIB libspdk_bdev_xnvme.a
00:04:48.046    CC module/bdev/ftl/bdev_ftl.o
00:04:48.046    CC module/bdev/ftl/bdev_ftl_rpc.o
00:04:48.046    SO libspdk_bdev_xnvme.so.3.0
00:04:48.305    LIB libspdk_bdev_zone_block.a
00:04:48.305    LIB libspdk_bdev_aio.a
00:04:48.305    CC module/bdev/iscsi/bdev_iscsi.o
00:04:48.305    SO libspdk_bdev_aio.so.6.0
00:04:48.305    SO libspdk_bdev_zone_block.so.6.0
00:04:48.305    SYMLINK libspdk_bdev_xnvme.so
00:04:48.305    SYMLINK libspdk_bdev_zone_block.so
00:04:48.305    CC module/bdev/raid/concat.o
00:04:48.305    SYMLINK libspdk_bdev_aio.so
00:04:48.564    CC module/bdev/nvme/vbdev_opal.o
00:04:48.564    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:04:48.564    CC module/bdev/nvme/vbdev_opal_rpc.o
00:04:48.564    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:04:48.564    CC module/bdev/virtio/bdev_virtio_scsi.o
00:04:48.823    LIB libspdk_bdev_ftl.a
00:04:48.823    SO libspdk_bdev_ftl.so.6.0
00:04:48.823    SYMLINK libspdk_bdev_ftl.so
00:04:48.823    CC module/bdev/virtio/bdev_virtio_blk.o
00:04:48.823    CC module/bdev/virtio/bdev_virtio_rpc.o
00:04:49.081    LIB libspdk_bdev_iscsi.a
00:04:49.081    SO libspdk_bdev_iscsi.so.6.0
00:04:49.081    LIB libspdk_bdev_raid.a
00:04:49.081    SYMLINK libspdk_bdev_iscsi.so
00:04:49.081    SO libspdk_bdev_raid.so.6.0
00:04:49.339    SYMLINK libspdk_bdev_raid.so
00:04:49.598    LIB libspdk_bdev_virtio.a
00:04:49.856    SO libspdk_bdev_virtio.so.6.0
00:04:49.856    SYMLINK libspdk_bdev_virtio.so
00:04:51.763    LIB libspdk_bdev_nvme.a
00:04:51.763    SO libspdk_bdev_nvme.so.7.1
00:04:51.763    SYMLINK libspdk_bdev_nvme.so
00:04:52.331    CC module/event/subsystems/keyring/keyring.o
00:04:52.331    CC module/event/subsystems/iobuf/iobuf.o
00:04:52.331    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:04:52.331    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:04:52.590    CC module/event/subsystems/vmd/vmd.o
00:04:52.590    CC module/event/subsystems/vmd/vmd_rpc.o
00:04:52.590    CC module/event/subsystems/scheduler/scheduler.o
00:04:52.590    CC module/event/subsystems/fsdev/fsdev.o
00:04:52.590    CC module/event/subsystems/sock/sock.o
00:04:52.590    LIB libspdk_event_keyring.a
00:04:52.849    LIB libspdk_event_vhost_blk.a
00:04:52.849    SO libspdk_event_keyring.so.1.0
00:04:52.849    LIB libspdk_event_fsdev.a
00:04:52.849    LIB libspdk_event_vmd.a
00:04:52.849    LIB libspdk_event_scheduler.a
00:04:52.849    LIB libspdk_event_iobuf.a
00:04:52.849    LIB libspdk_event_sock.a
00:04:52.849    SO libspdk_event_vhost_blk.so.3.0
00:04:52.849    SO libspdk_event_fsdev.so.1.0
00:04:52.849    SO libspdk_event_scheduler.so.4.0
00:04:52.849    SO libspdk_event_vmd.so.6.0
00:04:52.849    SO libspdk_event_iobuf.so.3.0
00:04:52.849    SO libspdk_event_sock.so.5.0
00:04:52.849    SYMLINK libspdk_event_keyring.so
00:04:52.849    SYMLINK libspdk_event_fsdev.so
00:04:52.849    SYMLINK libspdk_event_vhost_blk.so
00:04:52.849    SYMLINK libspdk_event_scheduler.so
00:04:52.849    SYMLINK libspdk_event_iobuf.so
00:04:52.849    SYMLINK libspdk_event_sock.so
00:04:52.849    SYMLINK libspdk_event_vmd.so
00:04:53.107    CC module/event/subsystems/accel/accel.o
00:04:53.365    LIB libspdk_event_accel.a
00:04:53.365    SO libspdk_event_accel.so.6.0
00:04:53.624    SYMLINK libspdk_event_accel.so
00:04:53.624    CC module/event/subsystems/bdev/bdev.o
00:04:54.191    LIB libspdk_event_bdev.a
00:04:54.191    SO libspdk_event_bdev.so.6.0
00:04:54.191    SYMLINK libspdk_event_bdev.so
00:04:54.450    CC module/event/subsystems/scsi/scsi.o
00:04:54.450    CC module/event/subsystems/ublk/ublk.o
00:04:54.450    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:04:54.450    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:04:54.450    CC module/event/subsystems/nbd/nbd.o
00:04:54.709    LIB libspdk_event_nbd.a
00:04:54.709    SO libspdk_event_nbd.so.6.0
00:04:54.709    LIB libspdk_event_ublk.a
00:04:54.709    LIB libspdk_event_scsi.a
00:04:54.709    SO libspdk_event_ublk.so.3.0
00:04:54.709    SO libspdk_event_scsi.so.6.0
00:04:54.709    SYMLINK libspdk_event_nbd.so
00:04:54.709    SYMLINK libspdk_event_ublk.so
00:04:54.709    SYMLINK libspdk_event_scsi.so
00:04:54.709    LIB libspdk_event_nvmf.a
00:04:54.967    SO libspdk_event_nvmf.so.6.0
00:04:54.967    SYMLINK libspdk_event_nvmf.so
00:04:54.967    CC module/event/subsystems/iscsi/iscsi.o
00:04:54.967    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:04:55.225    LIB libspdk_event_vhost_scsi.a
00:04:55.225    SO libspdk_event_vhost_scsi.so.3.0
00:04:55.225    LIB libspdk_event_iscsi.a
00:04:55.483    SO libspdk_event_iscsi.so.6.0
00:04:55.483    SYMLINK libspdk_event_vhost_scsi.so
00:04:55.483    SYMLINK libspdk_event_iscsi.so
00:04:55.483    SO libspdk.so.6.0
00:04:55.483    SYMLINK libspdk.so
00:04:55.742    CXX app/trace/trace.o
00:04:55.742    CC app/spdk_lspci/spdk_lspci.o
00:04:55.742    CC app/trace_record/trace_record.o
00:04:55.742    CC app/spdk_nvme_perf/perf.o
00:04:56.001    CC app/nvmf_tgt/nvmf_main.o
00:04:56.001    CC app/iscsi_tgt/iscsi_tgt.o
00:04:56.001    CC test/thread/poller_perf/poller_perf.o
00:04:56.001    CC examples/ioat/perf/perf.o
00:04:56.001    CC app/spdk_tgt/spdk_tgt.o
00:04:56.001    CC examples/util/zipf/zipf.o
00:04:56.001    LINK spdk_lspci
00:04:56.259    LINK nvmf_tgt
00:04:56.259    LINK poller_perf
00:04:56.259    LINK zipf
00:04:56.259    LINK iscsi_tgt
00:04:56.518    LINK spdk_tgt
00:04:56.518    LINK spdk_trace_record
00:04:56.518    LINK ioat_perf
00:04:56.776    LINK spdk_trace
00:04:56.776    CC app/spdk_nvme_identify/identify.o
00:04:56.776    CC app/spdk_nvme_discover/discovery_aer.o
00:04:56.776    CC examples/interrupt_tgt/interrupt_tgt.o
00:04:57.035    CC app/spdk_top/spdk_top.o
00:04:57.035    CC examples/ioat/verify/verify.o
00:04:57.035    TEST_HEADER include/spdk/accel.h
00:04:57.035    TEST_HEADER include/spdk/accel_module.h
00:04:57.035    TEST_HEADER include/spdk/assert.h
00:04:57.035    TEST_HEADER include/spdk/barrier.h
00:04:57.035    TEST_HEADER include/spdk/base64.h
00:04:57.035    TEST_HEADER include/spdk/bdev.h
00:04:57.035    TEST_HEADER include/spdk/bdev_module.h
00:04:57.035    TEST_HEADER include/spdk/bdev_zone.h
00:04:57.035    TEST_HEADER include/spdk/bit_array.h
00:04:57.035    TEST_HEADER include/spdk/bit_pool.h
00:04:57.035    TEST_HEADER include/spdk/blob_bdev.h
00:04:57.035    TEST_HEADER include/spdk/blobfs_bdev.h
00:04:57.035    TEST_HEADER include/spdk/blobfs.h
00:04:57.035    TEST_HEADER include/spdk/blob.h
00:04:57.035    TEST_HEADER include/spdk/conf.h
00:04:57.035    TEST_HEADER include/spdk/config.h
00:04:57.035    CC test/dma/test_dma/test_dma.o
00:04:57.035    TEST_HEADER include/spdk/cpuset.h
00:04:57.035    TEST_HEADER include/spdk/crc16.h
00:04:57.035    TEST_HEADER include/spdk/crc32.h
00:04:57.035    TEST_HEADER include/spdk/crc64.h
00:04:57.035    TEST_HEADER include/spdk/dif.h
00:04:57.035    TEST_HEADER include/spdk/dma.h
00:04:57.035    TEST_HEADER include/spdk/endian.h
00:04:57.035    TEST_HEADER include/spdk/env_dpdk.h
00:04:57.035    TEST_HEADER include/spdk/env.h
00:04:57.035    TEST_HEADER include/spdk/event.h
00:04:57.035    TEST_HEADER include/spdk/fd_group.h
00:04:57.035    TEST_HEADER include/spdk/fd.h
00:04:57.035    TEST_HEADER include/spdk/file.h
00:04:57.035    TEST_HEADER include/spdk/fsdev.h
00:04:57.035    TEST_HEADER include/spdk/fsdev_module.h
00:04:57.035    TEST_HEADER include/spdk/ftl.h
00:04:57.035    TEST_HEADER include/spdk/fuse_dispatcher.h
00:04:57.035    TEST_HEADER include/spdk/gpt_spec.h
00:04:57.035    TEST_HEADER include/spdk/hexlify.h
00:04:57.035    TEST_HEADER include/spdk/histogram_data.h
00:04:57.036    TEST_HEADER include/spdk/idxd.h
00:04:57.036    TEST_HEADER include/spdk/idxd_spec.h
00:04:57.036    TEST_HEADER include/spdk/init.h
00:04:57.036    TEST_HEADER include/spdk/ioat.h
00:04:57.036    TEST_HEADER include/spdk/ioat_spec.h
00:04:57.036    TEST_HEADER include/spdk/iscsi_spec.h
00:04:57.036    TEST_HEADER include/spdk/json.h
00:04:57.036    TEST_HEADER include/spdk/jsonrpc.h
00:04:57.295    TEST_HEADER include/spdk/keyring.h
00:04:57.295    TEST_HEADER include/spdk/keyring_module.h
00:04:57.295    TEST_HEADER include/spdk/likely.h
00:04:57.295    TEST_HEADER include/spdk/log.h
00:04:57.295    TEST_HEADER include/spdk/lvol.h
00:04:57.295    TEST_HEADER include/spdk/md5.h
00:04:57.295    TEST_HEADER include/spdk/memory.h
00:04:57.295    CC test/app/bdev_svc/bdev_svc.o
00:04:57.295    TEST_HEADER include/spdk/mmio.h
00:04:57.295    TEST_HEADER include/spdk/nbd.h
00:04:57.295    TEST_HEADER include/spdk/net.h
00:04:57.295    TEST_HEADER include/spdk/notify.h
00:04:57.295    TEST_HEADER include/spdk/nvme.h
00:04:57.295    TEST_HEADER include/spdk/nvme_intel.h
00:04:57.295    TEST_HEADER include/spdk/nvme_ocssd.h
00:04:57.295    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:04:57.295    LINK spdk_nvme_discover
00:04:57.295    TEST_HEADER include/spdk/nvme_spec.h
00:04:57.295    TEST_HEADER include/spdk/nvme_zns.h
00:04:57.295    TEST_HEADER include/spdk/nvmf_cmd.h
00:04:57.295    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:04:57.295    TEST_HEADER include/spdk/nvmf.h
00:04:57.295    TEST_HEADER include/spdk/nvmf_spec.h
00:04:57.295    TEST_HEADER include/spdk/nvmf_transport.h
00:04:57.295    TEST_HEADER include/spdk/opal.h
00:04:57.295    TEST_HEADER include/spdk/opal_spec.h
00:04:57.295    TEST_HEADER include/spdk/pci_ids.h
00:04:57.295    TEST_HEADER include/spdk/pipe.h
00:04:57.295    TEST_HEADER include/spdk/queue.h
00:04:57.295    TEST_HEADER include/spdk/reduce.h
00:04:57.295    TEST_HEADER include/spdk/rpc.h
00:04:57.295    TEST_HEADER include/spdk/scheduler.h
00:04:57.295    TEST_HEADER include/spdk/scsi.h
00:04:57.295    TEST_HEADER include/spdk/scsi_spec.h
00:04:57.295    LINK interrupt_tgt
00:04:57.295    TEST_HEADER include/spdk/sock.h
00:04:57.295    TEST_HEADER include/spdk/stdinc.h
00:04:57.295    TEST_HEADER include/spdk/string.h
00:04:57.295    TEST_HEADER include/spdk/thread.h
00:04:57.295    TEST_HEADER include/spdk/trace.h
00:04:57.295    TEST_HEADER include/spdk/trace_parser.h
00:04:57.295    TEST_HEADER include/spdk/tree.h
00:04:57.295    TEST_HEADER include/spdk/ublk.h
00:04:57.295    TEST_HEADER include/spdk/util.h
00:04:57.295    TEST_HEADER include/spdk/uuid.h
00:04:57.295    TEST_HEADER include/spdk/version.h
00:04:57.295    TEST_HEADER include/spdk/vfio_user_pci.h
00:04:57.295    TEST_HEADER include/spdk/vfio_user_spec.h
00:04:57.295    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:04:57.295    TEST_HEADER include/spdk/vhost.h
00:04:57.295    TEST_HEADER include/spdk/vmd.h
00:04:57.295    TEST_HEADER include/spdk/xor.h
00:04:57.295    TEST_HEADER include/spdk/zipf.h
00:04:57.295    CXX test/cpp_headers/accel.o
00:04:57.295    LINK verify
00:04:57.553    CXX test/cpp_headers/accel_module.o
00:04:57.553    LINK bdev_svc
00:04:57.553    CXX test/cpp_headers/assert.o
00:04:57.553    LINK spdk_nvme_perf
00:04:57.811    CC app/spdk_dd/spdk_dd.o
00:04:57.811    CXX test/cpp_headers/barrier.o
00:04:58.070    CXX test/cpp_headers/base64.o
00:04:58.070    CC examples/thread/thread/thread_ex.o
00:04:58.070    CC test/app/histogram_perf/histogram_perf.o
00:04:58.070    LINK test_dma
00:04:58.070    CC examples/sock/hello_world/hello_sock.o
00:04:58.328    LINK nvme_fuzz
00:04:58.328    CXX test/cpp_headers/bdev.o
00:04:58.328    CC test/app/jsoncat/jsoncat.o
00:04:58.328    LINK histogram_perf
00:04:58.328    CXX test/cpp_headers/bdev_module.o
00:04:58.587    LINK spdk_nvme_identify
00:04:58.587    LINK thread
00:04:58.587    LINK jsoncat
00:04:58.587    LINK spdk_top
00:04:58.587    LINK spdk_dd
00:04:58.587    LINK hello_sock
00:04:58.587    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:04:58.845    CXX test/cpp_headers/bdev_zone.o
00:04:58.845    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:04:58.845    CXX test/cpp_headers/bit_array.o
00:04:59.103    CC test/event/event_perf/event_perf.o
00:04:59.103    CC test/event/reactor/reactor.o
00:04:59.103    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:04:59.103    CC test/event/reactor_perf/reactor_perf.o
00:04:59.103    CC test/env/mem_callbacks/mem_callbacks.o
00:04:59.103    CXX test/cpp_headers/bit_pool.o
00:04:59.103    LINK reactor
00:04:59.362    CC examples/vmd/lsvmd/lsvmd.o
00:04:59.362    LINK event_perf
00:04:59.362    CC test/event/app_repeat/app_repeat.o
00:04:59.362    LINK reactor_perf
00:04:59.362    CC app/fio/nvme/fio_plugin.o
00:04:59.362    CXX test/cpp_headers/blob_bdev.o
00:04:59.621    LINK lsvmd
00:04:59.621    LINK app_repeat
00:04:59.621    CC app/fio/bdev/fio_plugin.o
00:04:59.621    CXX test/cpp_headers/blobfs_bdev.o
00:04:59.879    CC examples/vmd/led/led.o
00:04:59.879    CXX test/cpp_headers/blobfs.o
00:04:59.879    LINK vhost_fuzz
00:05:00.137    CC test/app/stub/stub.o
00:05:00.137    LINK mem_callbacks
00:05:00.137    LINK led
00:05:00.137    CXX test/cpp_headers/blob.o
00:05:00.137    CC test/event/scheduler/scheduler.o
00:05:00.137    LINK spdk_nvme
00:05:00.137    CC test/rpc_client/rpc_client_test.o
00:05:00.395    CC test/nvme/aer/aer.o
00:05:00.396    CC test/env/vtophys/vtophys.o
00:05:00.396    CXX test/cpp_headers/conf.o
00:05:00.396    LINK stub
00:05:00.396    CXX test/cpp_headers/config.o
00:05:00.396    LINK scheduler
00:05:00.654    LINK spdk_bdev
00:05:00.654    LINK rpc_client_test
00:05:00.654    CC examples/idxd/perf/perf.o
00:05:00.654    LINK vtophys
00:05:00.654    CXX test/cpp_headers/cpuset.o
00:05:00.654    CC test/accel/dif/dif.o
00:05:00.916    CXX test/cpp_headers/crc16.o
00:05:00.916    CXX test/cpp_headers/crc32.o
00:05:00.916    CC app/vhost/vhost.o
00:05:00.916    LINK aer
00:05:00.916    CC test/blobfs/mkfs/mkfs.o
00:05:01.179    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:05:01.179    CC examples/fsdev/hello_world/hello_fsdev.o
00:05:01.179    CXX test/cpp_headers/crc64.o
00:05:01.179    LINK mkfs
00:05:01.179    CC test/env/memory/memory_ut.o
00:05:01.179    LINK vhost
00:05:01.438    CXX test/cpp_headers/dif.o
00:05:01.438    LINK idxd_perf
00:05:01.438    LINK env_dpdk_post_init
00:05:01.438    CC test/nvme/reset/reset.o
00:05:01.749    CXX test/cpp_headers/dma.o
00:05:01.749    LINK hello_fsdev
00:05:01.749    CC test/env/pci/pci_ut.o
00:05:02.028    CC examples/accel/perf/accel_perf.o
00:05:02.028    CC test/nvme/sgl/sgl.o
00:05:02.028    LINK dif
00:05:02.028    CXX test/cpp_headers/endian.o
00:05:02.028    LINK reset
00:05:02.028    CXX test/cpp_headers/env_dpdk.o
00:05:02.028    CC test/lvol/esnap/esnap.o
00:05:02.286    CXX test/cpp_headers/env.o
00:05:02.286    CXX test/cpp_headers/event.o
00:05:02.286    LINK pci_ut
00:05:02.544    LINK sgl
00:05:02.544    LINK iscsi_fuzz
00:05:02.544    CXX test/cpp_headers/fd_group.o
00:05:02.544    CC examples/nvme/hello_world/hello_world.o
00:05:02.544    CC examples/blob/cli/blobcli.o
00:05:02.544    CC examples/blob/hello_world/hello_blob.o
00:05:02.802    CC test/nvme/e2edp/nvme_dp.o
00:05:02.802    CXX test/cpp_headers/fd.o
00:05:02.802    CC examples/nvme/reconnect/reconnect.o
00:05:03.060    CXX test/cpp_headers/file.o
00:05:03.060    LINK hello_blob
00:05:03.060    LINK accel_perf
00:05:03.060    CC test/bdev/bdevio/bdevio.o
00:05:03.060    LINK memory_ut
00:05:03.060    LINK hello_world
00:05:03.060    LINK nvme_dp
00:05:03.060    CXX test/cpp_headers/fsdev.o
00:05:03.318    LINK reconnect
00:05:03.318    CXX test/cpp_headers/fsdev_module.o
00:05:03.318    CC examples/nvme/nvme_manage/nvme_manage.o
00:05:03.318    CC test/nvme/overhead/overhead.o
00:05:03.318    CC examples/nvme/arbitration/arbitration.o
00:05:03.575    CC test/nvme/err_injection/err_injection.o
00:05:03.575    LINK bdevio
00:05:03.575    LINK blobcli
00:05:03.575    CC examples/bdev/hello_world/hello_bdev.o
00:05:03.575    CXX test/cpp_headers/ftl.o
00:05:03.833    CC examples/nvme/hotplug/hotplug.o
00:05:03.833    LINK overhead
00:05:03.833    LINK hello_bdev
00:05:03.833    CXX test/cpp_headers/fuse_dispatcher.o
00:05:03.833    LINK err_injection
00:05:03.833    CC test/nvme/startup/startup.o
00:05:04.090    LINK arbitration
00:05:04.090    LINK nvme_manage
00:05:04.090    LINK hotplug
00:05:04.090    CXX test/cpp_headers/gpt_spec.o
00:05:04.090    LINK startup
00:05:04.090    CC test/nvme/reserve/reserve.o
00:05:04.347    CC test/nvme/simple_copy/simple_copy.o
00:05:04.347    CC test/nvme/connect_stress/connect_stress.o
00:05:04.347    CC examples/bdev/bdevperf/bdevperf.o
00:05:04.347    CC examples/nvme/cmb_copy/cmb_copy.o
00:05:04.348    CC examples/nvme/abort/abort.o
00:05:04.348    CXX test/cpp_headers/hexlify.o
00:05:04.605    CC test/nvme/boot_partition/boot_partition.o
00:05:04.605    CC test/nvme/compliance/nvme_compliance.o
00:05:04.605    LINK reserve
00:05:04.605    LINK cmb_copy
00:05:04.605    LINK connect_stress
00:05:04.862    CXX test/cpp_headers/histogram_data.o
00:05:04.862    LINK boot_partition
00:05:04.862    LINK simple_copy
00:05:04.862    CXX test/cpp_headers/idxd.o
00:05:04.862    CC test/nvme/fused_ordering/fused_ordering.o
00:05:05.120    LINK abort
00:05:05.120    CXX test/cpp_headers/idxd_spec.o
00:05:05.120    CC test/nvme/doorbell_aers/doorbell_aers.o
00:05:05.120    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:05:05.120    CC test/nvme/fdp/fdp.o
00:05:05.120    LINK fused_ordering
00:05:05.120    CC test/nvme/cuse/cuse.o
00:05:05.120    CXX test/cpp_headers/init.o
00:05:05.376    LINK nvme_compliance
00:05:05.376    CXX test/cpp_headers/ioat.o
00:05:05.376    LINK doorbell_aers
00:05:05.376    LINK pmr_persistence
00:05:05.376    CXX test/cpp_headers/ioat_spec.o
00:05:05.376    CXX test/cpp_headers/iscsi_spec.o
00:05:05.376    CXX test/cpp_headers/json.o
00:05:05.376    CXX test/cpp_headers/jsonrpc.o
00:05:05.376    CXX test/cpp_headers/keyring.o
00:05:05.376    LINK bdevperf
00:05:05.376    CXX test/cpp_headers/keyring_module.o
00:05:05.633    CXX test/cpp_headers/likely.o
00:05:05.633    CXX test/cpp_headers/log.o
00:05:05.633    CXX test/cpp_headers/lvol.o
00:05:05.633    LINK fdp
00:05:05.633    CXX test/cpp_headers/md5.o
00:05:05.633    CXX test/cpp_headers/memory.o
00:05:05.633    CXX test/cpp_headers/mmio.o
00:05:05.633    CXX test/cpp_headers/nbd.o
00:05:05.891    CXX test/cpp_headers/net.o
00:05:05.891    CXX test/cpp_headers/notify.o
00:05:05.891    CXX test/cpp_headers/nvme.o
00:05:05.891    CXX test/cpp_headers/nvme_intel.o
00:05:05.891    CXX test/cpp_headers/nvme_ocssd.o
00:05:05.891    CXX test/cpp_headers/nvme_ocssd_spec.o
00:05:05.891    CXX test/cpp_headers/nvme_spec.o
00:05:05.891    CXX test/cpp_headers/nvme_zns.o
00:05:05.891    CXX test/cpp_headers/nvmf_cmd.o
00:05:05.891    CXX test/cpp_headers/nvmf_fc_spec.o
00:05:05.891    CC examples/nvmf/nvmf/nvmf.o
00:05:06.149    CXX test/cpp_headers/nvmf.o
00:05:06.149    CXX test/cpp_headers/nvmf_spec.o
00:05:06.149    CXX test/cpp_headers/nvmf_transport.o
00:05:06.149    CXX test/cpp_headers/opal.o
00:05:06.149    CXX test/cpp_headers/opal_spec.o
00:05:06.149    CXX test/cpp_headers/pci_ids.o
00:05:06.408    CXX test/cpp_headers/pipe.o
00:05:06.408    CXX test/cpp_headers/queue.o
00:05:06.408    CXX test/cpp_headers/reduce.o
00:05:06.408    CXX test/cpp_headers/rpc.o
00:05:06.408    CXX test/cpp_headers/scheduler.o
00:05:06.408    CXX test/cpp_headers/scsi.o
00:05:06.408    CXX test/cpp_headers/scsi_spec.o
00:05:06.666    CXX test/cpp_headers/sock.o
00:05:06.666    CXX test/cpp_headers/stdinc.o
00:05:06.666    CXX test/cpp_headers/string.o
00:05:06.666    LINK nvmf
00:05:06.666    CXX test/cpp_headers/thread.o
00:05:06.666    CXX test/cpp_headers/trace.o
00:05:06.666    CXX test/cpp_headers/trace_parser.o
00:05:06.666    CXX test/cpp_headers/tree.o
00:05:06.924    CXX test/cpp_headers/ublk.o
00:05:06.924    CXX test/cpp_headers/util.o
00:05:06.924    CXX test/cpp_headers/uuid.o
00:05:06.924    CXX test/cpp_headers/version.o
00:05:06.924    CXX test/cpp_headers/vfio_user_pci.o
00:05:06.924    CXX test/cpp_headers/vfio_user_spec.o
00:05:06.924    CXX test/cpp_headers/vhost.o
00:05:06.924    CXX test/cpp_headers/vmd.o
00:05:06.924    CXX test/cpp_headers/xor.o
00:05:06.924    CXX test/cpp_headers/zipf.o
00:05:07.182    LINK cuse
00:05:10.493    LINK esnap
00:05:10.493  
00:05:10.493  real	2m19.861s
00:05:10.493  user	13m46.532s
00:05:10.493  sys	2m12.146s
00:05:10.493   14:15:49 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:10.493  ************************************
00:05:10.493  END TEST make
00:05:10.493  ************************************
00:05:10.493   14:15:49 make -- common/autotest_common.sh@10 -- $ set +x
00:05:10.493   14:15:49  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:05:10.493   14:15:49  -- pm/common@29 -- $ signal_monitor_resources TERM
00:05:10.493   14:15:49  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:05:10.493   14:15:49  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.493   14:15:49  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:05:10.493   14:15:49  -- pm/common@44 -- $ pid=5336
00:05:10.493   14:15:49  -- pm/common@50 -- $ kill -TERM 5336
00:05:10.493   14:15:49  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.493   14:15:49  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:05:10.493   14:15:49  -- pm/common@44 -- $ pid=5338
00:05:10.493   14:15:49  -- pm/common@50 -- $ kill -TERM 5338
00:05:10.493   14:15:49  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:05:10.493   14:15:49  -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:10.493    14:15:49  -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:10.493     14:15:49  -- common/autotest_common.sh@1693 -- # lcov --version
00:05:10.493     14:15:49  -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:10.752    14:15:49  -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:10.752    14:15:49  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:10.752    14:15:49  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:10.752    14:15:49  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:10.752    14:15:49  -- scripts/common.sh@336 -- # IFS=.-:
00:05:10.752    14:15:49  -- scripts/common.sh@336 -- # read -ra ver1
00:05:10.752    14:15:49  -- scripts/common.sh@337 -- # IFS=.-:
00:05:10.752    14:15:49  -- scripts/common.sh@337 -- # read -ra ver2
00:05:10.752    14:15:49  -- scripts/common.sh@338 -- # local 'op=<'
00:05:10.752    14:15:49  -- scripts/common.sh@340 -- # ver1_l=2
00:05:10.752    14:15:49  -- scripts/common.sh@341 -- # ver2_l=1
00:05:10.752    14:15:49  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:10.752    14:15:49  -- scripts/common.sh@344 -- # case "$op" in
00:05:10.752    14:15:49  -- scripts/common.sh@345 -- # : 1
00:05:10.752    14:15:49  -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:10.752    14:15:49  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:10.752     14:15:49  -- scripts/common.sh@365 -- # decimal 1
00:05:10.752     14:15:49  -- scripts/common.sh@353 -- # local d=1
00:05:10.752     14:15:49  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:10.752     14:15:49  -- scripts/common.sh@355 -- # echo 1
00:05:10.752    14:15:49  -- scripts/common.sh@365 -- # ver1[v]=1
00:05:10.752     14:15:49  -- scripts/common.sh@366 -- # decimal 2
00:05:10.752     14:15:49  -- scripts/common.sh@353 -- # local d=2
00:05:10.753     14:15:49  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:10.753     14:15:49  -- scripts/common.sh@355 -- # echo 2
00:05:10.753    14:15:49  -- scripts/common.sh@366 -- # ver2[v]=2
00:05:10.753    14:15:49  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:10.753    14:15:49  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:10.753    14:15:49  -- scripts/common.sh@368 -- # return 0
00:05:10.753    14:15:49  -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:10.753    14:15:49  -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:10.753  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.753  		--rc genhtml_branch_coverage=1
00:05:10.753  		--rc genhtml_function_coverage=1
00:05:10.753  		--rc genhtml_legend=1
00:05:10.753  		--rc geninfo_all_blocks=1
00:05:10.753  		--rc geninfo_unexecuted_blocks=1
00:05:10.753  		
00:05:10.753  		'
00:05:10.753    14:15:49  -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:10.753  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.753  		--rc genhtml_branch_coverage=1
00:05:10.753  		--rc genhtml_function_coverage=1
00:05:10.753  		--rc genhtml_legend=1
00:05:10.753  		--rc geninfo_all_blocks=1
00:05:10.753  		--rc geninfo_unexecuted_blocks=1
00:05:10.753  		
00:05:10.753  		'
00:05:10.753    14:15:49  -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:10.753  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.753  		--rc genhtml_branch_coverage=1
00:05:10.753  		--rc genhtml_function_coverage=1
00:05:10.753  		--rc genhtml_legend=1
00:05:10.753  		--rc geninfo_all_blocks=1
00:05:10.753  		--rc geninfo_unexecuted_blocks=1
00:05:10.753  		
00:05:10.753  		'
00:05:10.753    14:15:49  -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:10.753  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.753  		--rc genhtml_branch_coverage=1
00:05:10.753  		--rc genhtml_function_coverage=1
00:05:10.753  		--rc genhtml_legend=1
00:05:10.753  		--rc geninfo_all_blocks=1
00:05:10.753  		--rc geninfo_unexecuted_blocks=1
00:05:10.753  		
00:05:10.753  		'
00:05:10.753   14:15:49  -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:10.753     14:15:49  -- nvmf/common.sh@7 -- # uname -s
00:05:10.753    14:15:49  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:10.753    14:15:49  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:10.753    14:15:49  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:10.753    14:15:49  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:10.753    14:15:49  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:10.753    14:15:49  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:10.753    14:15:49  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:10.753    14:15:49  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:10.753    14:15:49  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:10.753     14:15:49  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:10.753    14:15:49  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d23fef63-b4ba-422a-867d-7e27affacb1a
00:05:10.753    14:15:49  -- nvmf/common.sh@18 -- # NVME_HOSTID=d23fef63-b4ba-422a-867d-7e27affacb1a
00:05:10.753    14:15:49  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:10.753    14:15:49  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:10.753    14:15:49  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:10.753    14:15:49  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:10.753    14:15:49  -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:10.753     14:15:49  -- scripts/common.sh@15 -- # shopt -s extglob
00:05:10.753     14:15:49  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:10.753     14:15:49  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:10.753     14:15:49  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:10.753      14:15:49  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.753      14:15:49  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.753      14:15:49  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.753      14:15:49  -- paths/export.sh@5 -- # export PATH
00:05:10.753      14:15:49  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:10.753    14:15:49  -- nvmf/common.sh@51 -- # : 0
00:05:10.753    14:15:49  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:10.753    14:15:49  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:10.753    14:15:49  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:10.753    14:15:49  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:10.753    14:15:49  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:10.753    14:15:49  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:10.753  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:10.753    14:15:49  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:10.753    14:15:49  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:10.753    14:15:49  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:10.753   14:15:49  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:05:10.753    14:15:49  -- spdk/autotest.sh@32 -- # uname -s
00:05:10.753   14:15:49  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:05:10.753   14:15:49  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:05:10.753   14:15:49  -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:10.753   14:15:49  -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:05:10.753   14:15:49  -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:10.753   14:15:49  -- spdk/autotest.sh@44 -- # modprobe nbd
00:05:10.753    14:15:49  -- spdk/autotest.sh@46 -- # type -P udevadm
00:05:10.753   14:15:49  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:05:10.753   14:15:49  -- spdk/autotest.sh@48 -- # udevadm_pid=55291
00:05:10.753   14:15:49  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:05:10.753   14:15:49  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:05:10.753   14:15:49  -- pm/common@17 -- # local monitor
00:05:10.753   14:15:49  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.753    14:15:49  -- pm/common@21 -- # date +%s
00:05:10.753   14:15:49  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:10.753   14:15:49  -- pm/common@25 -- # sleep 1
00:05:10.753    14:15:49  -- pm/common@21 -- # date +%s
00:05:10.753   14:15:49  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732112149
00:05:10.753   14:15:49  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732112149
00:05:10.753  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732112149_collect-cpu-load.pm.log
00:05:10.753  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732112149_collect-vmstat.pm.log
00:05:11.687   14:15:50  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:05:11.687   14:15:50  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:05:11.687   14:15:50  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:11.687   14:15:50  -- common/autotest_common.sh@10 -- # set +x
00:05:11.687   14:15:50  -- spdk/autotest.sh@59 -- # create_test_list
00:05:11.687   14:15:50  -- common/autotest_common.sh@752 -- # xtrace_disable
00:05:11.687   14:15:50  -- common/autotest_common.sh@10 -- # set +x
00:05:11.946     14:15:50  -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:05:11.946    14:15:50  -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:05:11.946   14:15:50  -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:05:11.946   14:15:50  -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:05:11.946   14:15:50  -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:05:11.946   14:15:50  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:05:11.946    14:15:50  -- common/autotest_common.sh@1457 -- # uname
00:05:11.946   14:15:50  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:05:11.946   14:15:50  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:05:11.946    14:15:50  -- common/autotest_common.sh@1477 -- # uname
00:05:11.946   14:15:50  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:05:11.946   14:15:50  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:05:11.946   14:15:50  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:05:11.946  lcov: LCOV version 1.15
00:05:11.946   14:15:50  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:05:30.032  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:30.032  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:05:48.125   14:16:24  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:05:48.125   14:16:24  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:48.125   14:16:24  -- common/autotest_common.sh@10 -- # set +x
00:05:48.125   14:16:24  -- spdk/autotest.sh@78 -- # rm -f
00:05:48.125   14:16:24  -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:48.125  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:48.125  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:05:48.125  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:05:48.125  0000:00:12.0 (1b36 0010): Already using the nvme driver
00:05:48.125  0000:00:13.0 (1b36 0010): Already using the nvme driver
00:05:48.125   14:16:26  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:05:48.125   14:16:26  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:05:48.125   14:16:26  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:05:48.125   14:16:26  -- common/autotest_common.sh@1658 -- # local nvme bdf
00:05:48.125   14:16:26  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:48.125   14:16:26  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:05:48.125   14:16:26  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:05:48.125   14:16:26  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:48.125   14:16:26  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1
00:05:48.125   14:16:26  -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:05:48.125   14:16:26  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:48.125   14:16:26  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1
00:05:48.125   14:16:26  -- common/autotest_common.sh@1650 -- # local device=nvme2n1
00:05:48.125   14:16:26  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:48.125   14:16:26  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2
00:05:48.125   14:16:26  -- common/autotest_common.sh@1650 -- # local device=nvme2n2
00:05:48.125   14:16:26  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:48.125   14:16:26  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3
00:05:48.125   14:16:26  -- common/autotest_common.sh@1650 -- # local device=nvme2n3
00:05:48.125   14:16:26  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:48.125   14:16:26  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1
00:05:48.125   14:16:26  -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1
00:05:48.125   14:16:26  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:05:48.125   14:16:26  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1
00:05:48.125   14:16:26  -- common/autotest_common.sh@1650 -- # local device=nvme3n1
00:05:48.125   14:16:26  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:05:48.125   14:16:26  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:05:48.125   14:16:26  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:05:48.125   14:16:26  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:48.125   14:16:26  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:48.125   14:16:26  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:05:48.125   14:16:26  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:05:48.125   14:16:26  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:05:48.125  No valid GPT data, bailing
00:05:48.125    14:16:26  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:05:48.125   14:16:26  -- scripts/common.sh@394 -- # pt=
00:05:48.125   14:16:26  -- scripts/common.sh@395 -- # return 1
00:05:48.125   14:16:26  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:05:48.125  1+0 records in
00:05:48.125  1+0 records out
00:05:48.125  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115996 s, 90.4 MB/s
00:05:48.125   14:16:26  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:48.125   14:16:26  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:48.125   14:16:26  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:05:48.125   14:16:26  -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:05:48.125   14:16:26  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:05:48.125  No valid GPT data, bailing
00:05:48.125    14:16:26  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:05:48.125   14:16:26  -- scripts/common.sh@394 -- # pt=
00:05:48.125   14:16:26  -- scripts/common.sh@395 -- # return 1
00:05:48.125   14:16:26  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:05:48.125  1+0 records in
00:05:48.125  1+0 records out
00:05:48.125  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00353012 s, 297 MB/s
00:05:48.125   14:16:26  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:48.125   14:16:26  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:48.125   14:16:26  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1
00:05:48.125   14:16:26  -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt
00:05:48.125   14:16:26  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1
00:05:48.125  No valid GPT data, bailing
00:05:48.125    14:16:26  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1
00:05:48.125   14:16:26  -- scripts/common.sh@394 -- # pt=
00:05:48.125   14:16:26  -- scripts/common.sh@395 -- # return 1
00:05:48.125   14:16:26  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1
00:05:48.125  1+0 records in
00:05:48.125  1+0 records out
00:05:48.125  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00377474 s, 278 MB/s
00:05:48.125   14:16:26  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:48.125   14:16:26  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:48.125   14:16:26  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2
00:05:48.125   14:16:26  -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt
00:05:48.125   14:16:26  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2
00:05:48.125  No valid GPT data, bailing
00:05:48.125    14:16:26  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2
00:05:48.125   14:16:26  -- scripts/common.sh@394 -- # pt=
00:05:48.125   14:16:26  -- scripts/common.sh@395 -- # return 1
00:05:48.125   14:16:26  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1
00:05:48.125  1+0 records in
00:05:48.126  1+0 records out
00:05:48.126  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00443443 s, 236 MB/s
00:05:48.126   14:16:26  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:48.126   14:16:26  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:48.126   14:16:26  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3
00:05:48.126   14:16:26  -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt
00:05:48.126   14:16:26  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3
00:05:48.126  No valid GPT data, bailing
00:05:48.126    14:16:26  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3
00:05:48.126   14:16:26  -- scripts/common.sh@394 -- # pt=
00:05:48.126   14:16:26  -- scripts/common.sh@395 -- # return 1
00:05:48.126   14:16:26  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1
00:05:48.126  1+0 records in
00:05:48.126  1+0 records out
00:05:48.126  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419484 s, 250 MB/s
00:05:48.126   14:16:26  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:05:48.126   14:16:26  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:05:48.126   14:16:26  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1
00:05:48.126   14:16:26  -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt
00:05:48.126   14:16:26  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1
00:05:48.126  No valid GPT data, bailing
00:05:48.126    14:16:26  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1
00:05:48.126   14:16:26  -- scripts/common.sh@394 -- # pt=
00:05:48.126   14:16:26  -- scripts/common.sh@395 -- # return 1
00:05:48.126   14:16:26  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1
00:05:48.126  1+0 records in
00:05:48.126  1+0 records out
00:05:48.126  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00291903 s, 359 MB/s
00:05:48.126   14:16:26  -- spdk/autotest.sh@105 -- # sync
00:05:48.126   14:16:26  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:05:48.126   14:16:26  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:05:48.126    14:16:26  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:05:50.089    14:16:28  -- spdk/autotest.sh@111 -- # uname -s
00:05:50.089   14:16:28  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:05:50.089   14:16:28  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:05:50.089   14:16:28  -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:50.347  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:50.914  Hugepages
00:05:50.914  node     hugesize     free /  total
00:05:50.914  node0   1048576kB        0 /      0
00:05:50.914  node0      2048kB        0 /      0
00:05:50.914  
00:05:50.914  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:05:50.914  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:05:50.914  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:05:51.171  NVMe                      0000:00:11.0    1b36   0010   unknown nvme             nvme1      nvme1n1
00:05:51.171  NVMe                      0000:00:12.0    1b36   0010   unknown nvme             nvme2      nvme2n1 nvme2n2 nvme2n3
00:05:51.171  NVMe                      0000:00:13.0    1b36   0010   unknown nvme             nvme3      nvme3n1
00:05:51.171    14:16:30  -- spdk/autotest.sh@117 -- # uname -s
00:05:51.171   14:16:30  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:51.171   14:16:30  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:51.171   14:16:30  -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:51.736  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:52.303  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:52.303  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:05:52.303  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:52.303  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:05:52.562   14:16:31  -- common/autotest_common.sh@1517 -- # sleep 1
00:05:53.495   14:16:32  -- common/autotest_common.sh@1518 -- # bdfs=()
00:05:53.495   14:16:32  -- common/autotest_common.sh@1518 -- # local bdfs
00:05:53.495   14:16:32  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:05:53.495    14:16:32  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:05:53.495    14:16:32  -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:53.495    14:16:32  -- common/autotest_common.sh@1498 -- # local bdfs
00:05:53.495    14:16:32  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:53.495     14:16:32  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:53.495     14:16:32  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:53.495    14:16:32  -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:05:53.495    14:16:32  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:05:53.495   14:16:32  -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:53.753  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:54.011  Waiting for block devices as requested
00:05:54.011  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:05:54.269  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:05:54.269  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:05:54.269  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:05:59.537  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:05:59.537   14:16:38  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:05:59.537    14:16:38  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:05:59.537     14:16:38  -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:05:59.537     14:16:38  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:05:59.537    14:16:38  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:05:59.537    14:16:38  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:05:59.537     14:16:38  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:05:59.537    14:16:38  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:05:59.537   14:16:38  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:05:59.537   14:16:38  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:05:59.537    14:16:38  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:05:59.537    14:16:38  -- common/autotest_common.sh@1531 -- # grep oacs
00:05:59.537    14:16:38  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:05:59.537   14:16:38  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:05:59.537   14:16:38  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:05:59.537   14:16:38  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:05:59.537    14:16:38  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:05:59.537    14:16:38  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:05:59.537    14:16:38  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:05:59.537   14:16:38  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:05:59.537   14:16:38  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:05:59.537   14:16:38  -- common/autotest_common.sh@1543 -- # continue
00:05:59.537   14:16:38  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:05:59.537    14:16:38  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:05:59.537     14:16:38  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:05:59.537     14:16:38  -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme
00:05:59.537    14:16:38  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:05:59.537    14:16:38  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:05:59.537     14:16:38  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:05:59.537    14:16:38  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:05:59.537   14:16:38  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:05:59.537   14:16:38  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:05:59.537    14:16:38  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:05:59.537    14:16:38  -- common/autotest_common.sh@1531 -- # grep oacs
00:05:59.537    14:16:38  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:05:59.537   14:16:38  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:05:59.537   14:16:38  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:05:59.537   14:16:38  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:05:59.537    14:16:38  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:05:59.537    14:16:38  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:05:59.537    14:16:38  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:05:59.537   14:16:38  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:05:59.537   14:16:38  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:05:59.537   14:16:38  -- common/autotest_common.sh@1543 -- # continue
00:05:59.537   14:16:38  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:05:59.537    14:16:38  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0
00:05:59.537     14:16:38  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:05:59.537     14:16:38  -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme
00:05:59.537    14:16:38  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2
00:05:59.537    14:16:38  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]]
00:05:59.537     14:16:38  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2
00:05:59.537    14:16:38  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2
00:05:59.537   14:16:38  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2
00:05:59.537   14:16:38  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]]
00:05:59.537    14:16:38  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2
00:05:59.537    14:16:38  -- common/autotest_common.sh@1531 -- # grep oacs
00:05:59.537    14:16:38  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:05:59.537   14:16:38  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:05:59.538   14:16:38  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:05:59.538   14:16:38  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:05:59.538    14:16:38  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2
00:05:59.538    14:16:38  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:05:59.538    14:16:38  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:05:59.538   14:16:38  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:05:59.538   14:16:38  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:05:59.538   14:16:38  -- common/autotest_common.sh@1543 -- # continue
00:05:59.538   14:16:38  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:05:59.538    14:16:38  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0
00:05:59.538     14:16:38  -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme
00:05:59.538     14:16:38  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:05:59.538    14:16:38  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3
00:05:59.538    14:16:38  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]]
00:05:59.538     14:16:38  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3
00:05:59.538    14:16:38  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3
00:05:59.538   14:16:38  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3
00:05:59.538   14:16:38  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]]
00:05:59.538    14:16:38  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:05:59.538    14:16:38  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3
00:05:59.538    14:16:38  -- common/autotest_common.sh@1531 -- # grep oacs
00:05:59.538   14:16:38  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:05:59.538   14:16:38  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:05:59.538   14:16:38  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:05:59.538    14:16:38  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:05:59.538    14:16:38  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3
00:05:59.538    14:16:38  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:05:59.538   14:16:38  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:05:59.538   14:16:38  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:05:59.538   14:16:38  -- common/autotest_common.sh@1543 -- # continue
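
The pre_cleanup pass above repeats once per controller: resolve the BDF to its /dev/nvmeX node through the /sys/class/nvme symlinks, read OACS from nvme id-ctrl, mask bit 3 (0x8, Namespace Management: 0x12a & 0x8 = 8), and skip the drive when unvmcap is 0 because there is no unallocated NVM capacity to reclaim. A minimal standalone sketch of the same per-controller check (the loop and variable names are illustrative, not the harness's own helpers):

  # sketch: skip controllers with no unallocated capacity, as the loop above does
  for ctrl in /dev/nvme*; do
    [[ $ctrl =~ ^/dev/nvme[0-9]+$ ]] || continue          # controller nodes only, not namespaces
    oacs=$(nvme id-ctrl "$ctrl" | awk -F: '/^oacs/ {print $2}')
    (( oacs & 0x8 )) || continue                          # bit 3 = Namespace Management support
    unvmcap=$(nvme id-ctrl "$ctrl" | awk -F: '/^unvmcap/ {print $2}')
    (( unvmcap == 0 )) && continue                        # nothing to clean up on this drive
    echo "$ctrl: $unvmcap bytes of unallocated NVM capacity"
  done
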
00:05:59.538   14:16:38  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:05:59.538   14:16:38  -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:59.538   14:16:38  -- common/autotest_common.sh@10 -- # set +x
00:05:59.538   14:16:38  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:05:59.538   14:16:38  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:59.538   14:16:38  -- common/autotest_common.sh@10 -- # set +x
00:05:59.538   14:16:38  -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:00.105  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:00.672  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:00.672  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:00.672  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:06:00.672  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
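
setup.sh rebinds every NVMe-class PCI function from the kernel nvme driver to a userspace-capable one; with no vfio module in this guest (the EAL probe in the vtophys run below reports the same) it falls back to uio_pci_generic, and it leaves 0000:00:03.0 alone because the root filesystem partitions (vda2/vda3/vda5) are mounted from that virtio-blk device. One way to confirm the result from sysfs afterwards (a sketch, not part of setup.sh):

  # sketch: report the currently bound driver for each NVMe-class PCI function
  for dev in /sys/bus/pci/devices/*; do
    [[ $(cat "$dev/class") == 0x010802 ]] || continue     # 0x010802 = NVMe controller class code
    if [[ -e $dev/driver ]]; then
      drv=$(basename "$(readlink -f "$dev/driver")")
    else
      drv=unbound
    fi
    echo "$(basename "$dev") -> $drv"                     # e.g. 0000:00:10.0 -> uio_pci_generic
  done
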
00:06:00.930   14:16:39  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:00.930   14:16:39  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:00.930   14:16:39  -- common/autotest_common.sh@10 -- # set +x
00:06:00.930   14:16:39  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:00.930   14:16:39  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:00.930    14:16:39  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:00.930    14:16:39  -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:00.930    14:16:39  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:00.930    14:16:39  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:00.930    14:16:39  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:00.930     14:16:39  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:00.930     14:16:39  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:00.930     14:16:39  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:00.931     14:16:39  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:00.931      14:16:39  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:00.931      14:16:39  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:00.931     14:16:39  -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:06:00.931     14:16:39  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:06:00.931    14:16:39  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:00.931     14:16:39  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:00.931    14:16:39  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:00.931    14:16:39  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:00.931    14:16:39  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:00.931     14:16:39  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:06:00.931    14:16:39  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:00.931    14:16:39  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:00.931    14:16:39  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:00.931     14:16:39  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device
00:06:00.931    14:16:39  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:00.931    14:16:39  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:00.931    14:16:39  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:00.931     14:16:39  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device
00:06:00.931    14:16:39  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:00.931    14:16:39  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:00.931    14:16:39  -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:06:00.931    14:16:39  -- common/autotest_common.sh@1572 -- # return 0
00:06:00.931   14:16:39  -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:06:00.931   14:16:39  -- common/autotest_common.sh@1580 -- # return 0
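
opal_revert_cleanup only matters for physical drives that might still hold an OPAL lock from a previous run, so the harness filters the BDF list by PCI device ID, comparing each /sys/bus/pci/devices/<bdf>/device against 0x0a54 (as far as I can tell, the device ID of the Intel DC P45xx-family NVMe SSDs in the physical test pools; the constant itself is straight from the trace). The QEMU controllers here all report 0x0010 under vendor 1b36, so the filtered list stays empty and the revert is skipped. The filter reduces to roughly:

  # sketch of the device-ID filter performed above
  target=0x0a54
  mapfile -t all_bdfs < <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
                          | jq -r '.config[].params.traddr')
  bdfs=()
  for bdf in "${all_bdfs[@]}"; do
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$target" ]] && bdfs+=("$bdf")
  done
  (( ${#bdfs[@]} )) && printf '%s\n' "${bdfs[@]}" || echo "no 0x0a54 drives found"
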
00:06:00.931   14:16:39  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:00.931   14:16:39  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:00.931   14:16:39  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:00.931   14:16:39  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:00.931   14:16:39  -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:00.931   14:16:39  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:00.931   14:16:39  -- common/autotest_common.sh@10 -- # set +x
00:06:00.931   14:16:39  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:00.931   14:16:39  -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:00.931   14:16:39  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:00.931   14:16:39  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:00.931   14:16:39  -- common/autotest_common.sh@10 -- # set +x
00:06:00.931  ************************************
00:06:00.931  START TEST env
00:06:00.931  ************************************
00:06:00.931   14:16:39 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:01.205  * Looking for test storage...
00:06:01.205  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:06:01.205    14:16:39 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:01.205     14:16:39 env -- common/autotest_common.sh@1693 -- # lcov --version
00:06:01.205     14:16:39 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:01.205    14:16:40 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:01.205    14:16:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:01.205    14:16:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:01.205    14:16:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:01.205    14:16:40 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:01.205    14:16:40 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:01.205    14:16:40 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:01.205    14:16:40 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:01.205    14:16:40 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:01.205    14:16:40 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:01.205    14:16:40 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:01.205    14:16:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:01.205    14:16:40 env -- scripts/common.sh@344 -- # case "$op" in
00:06:01.205    14:16:40 env -- scripts/common.sh@345 -- # : 1
00:06:01.205    14:16:40 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:01.205    14:16:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:01.205     14:16:40 env -- scripts/common.sh@365 -- # decimal 1
00:06:01.205     14:16:40 env -- scripts/common.sh@353 -- # local d=1
00:06:01.205     14:16:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:01.205     14:16:40 env -- scripts/common.sh@355 -- # echo 1
00:06:01.205    14:16:40 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:01.205     14:16:40 env -- scripts/common.sh@366 -- # decimal 2
00:06:01.205     14:16:40 env -- scripts/common.sh@353 -- # local d=2
00:06:01.205     14:16:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:01.205     14:16:40 env -- scripts/common.sh@355 -- # echo 2
00:06:01.205    14:16:40 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:01.205    14:16:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:01.205    14:16:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:01.205    14:16:40 env -- scripts/common.sh@368 -- # return 0
00:06:01.205    14:16:40 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:01.205    14:16:40 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:01.205  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.205  		--rc genhtml_branch_coverage=1
00:06:01.205  		--rc genhtml_function_coverage=1
00:06:01.205  		--rc genhtml_legend=1
00:06:01.205  		--rc geninfo_all_blocks=1
00:06:01.205  		--rc geninfo_unexecuted_blocks=1
00:06:01.205  		
00:06:01.205  		'
00:06:01.205    14:16:40 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:01.205  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.205  		--rc genhtml_branch_coverage=1
00:06:01.205  		--rc genhtml_function_coverage=1
00:06:01.205  		--rc genhtml_legend=1
00:06:01.205  		--rc geninfo_all_blocks=1
00:06:01.205  		--rc geninfo_unexecuted_blocks=1
00:06:01.205  		
00:06:01.205  		'
00:06:01.205    14:16:40 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:01.205  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.205  		--rc genhtml_branch_coverage=1
00:06:01.205  		--rc genhtml_function_coverage=1
00:06:01.205  		--rc genhtml_legend=1
00:06:01.205  		--rc geninfo_all_blocks=1
00:06:01.205  		--rc geninfo_unexecuted_blocks=1
00:06:01.205  		
00:06:01.205  		'
00:06:01.205    14:16:40 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:01.205  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:01.205  		--rc genhtml_branch_coverage=1
00:06:01.205  		--rc genhtml_function_coverage=1
00:06:01.205  		--rc genhtml_legend=1
00:06:01.205  		--rc geninfo_all_blocks=1
00:06:01.205  		--rc geninfo_unexecuted_blocks=1
00:06:01.205  		
00:06:01.205  		'
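
The block above runs before each suite: the harness reads the installed lcov version (1.15 here, via awk '{print $NF}'), and lt 1.15 2, i.e. cmp_versions with op '<' splitting fields on IFS=.-:, succeeds, so the pre-2.0 coverage switches (--rc lcov_branch_coverage=1 and friends) are exported through LCOV_OPTS and LCOV. A common standalone idiom for the same ordering test uses sort -V rather than the field-by-field loop (a sketch, not the scripts/common.sh implementation):

  # sketch: version_lt A B succeeds when A sorts strictly before B
  version_lt() {
    [[ $1 == "$2" ]] && return 1
    [[ $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 \
    && echo "old lcov: pass coverage flags as --rc options"
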
00:06:01.205   14:16:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:01.205   14:16:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:01.205   14:16:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:01.205   14:16:40 env -- common/autotest_common.sh@10 -- # set +x
00:06:01.205  ************************************
00:06:01.205  START TEST env_memory
00:06:01.205  ************************************
00:06:01.205   14:16:40 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:01.205  
00:06:01.205  
00:06:01.205       CUnit - A unit testing framework for C - Version 2.1-3
00:06:01.205       http://cunit.sourceforge.net/
00:06:01.205  
00:06:01.205  
00:06:01.205  Suite: memory
00:06:01.205    Test: alloc and free memory map ...[2024-11-20 14:16:40.150781] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:01.463  passed
00:06:01.463    Test: mem map translation ...[2024-11-20 14:16:40.201329] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:01.463  [2024-11-20 14:16:40.201419] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:01.463  [2024-11-20 14:16:40.201504] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:01.463  [2024-11-20 14:16:40.201532] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:01.463  passed
00:06:01.463    Test: mem map registration ...[2024-11-20 14:16:40.283836] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:01.463  [2024-11-20 14:16:40.283937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:01.463  passed
00:06:01.463    Test: mem map adjacent registrations ...passed
00:06:01.463  
00:06:01.463  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:01.463                suites      1      1    n/a      0        0
00:06:01.463                 tests      4      4      4      0        0
00:06:01.463               asserts    152    152    152      0      n/a
00:06:01.463  
00:06:01.463  Elapsed time =    0.289 seconds
00:06:01.463  
00:06:01.463  real	0m0.327s
00:06:01.463  user	0m0.295s
00:06:01.463  sys	0m0.025s
00:06:01.463   14:16:40 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:01.463  ************************************
00:06:01.463   14:16:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:01.463  END TEST env_memory
00:06:01.463  ************************************
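
env_memory drives lib/env_dpdk/memory.c through CUnit, and the *ERROR* lines above are the point of the tests rather than failures: they feed spdk_mem_map_set_translation a misaligned length (len=1234), a misaligned vaddr (vaddr=1234), and an address above the 48-bit usermode range (281474976710656 = 2^48), asserting each is rejected; spdk_mem_register gets the same treatment. The binary is an ordinary executable, so it can be rerun outside the run_test wrapper (path from the trace):

  # sketch: rerun just the memory-map unit tests
  /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut    # prints the CUnit run summary
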
00:06:01.722   14:16:40 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:01.722   14:16:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:01.722   14:16:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:01.722   14:16:40 env -- common/autotest_common.sh@10 -- # set +x
00:06:01.722  ************************************
00:06:01.722  START TEST env_vtophys
00:06:01.722  ************************************
00:06:01.722   14:16:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:01.722  EAL: lib.eal log level changed from notice to debug
00:06:01.722  EAL: Detected lcore 0 as core 0 on socket 0
00:06:01.722  EAL: Detected lcore 1 as core 0 on socket 0
00:06:01.722  EAL: Detected lcore 2 as core 0 on socket 0
00:06:01.722  EAL: Detected lcore 3 as core 0 on socket 0
00:06:01.722  EAL: Detected lcore 4 as core 0 on socket 0
00:06:01.722  EAL: Detected lcore 5 as core 0 on socket 0
00:06:01.722  EAL: Detected lcore 6 as core 0 on socket 0
00:06:01.722  EAL: Detected lcore 7 as core 0 on socket 0
00:06:01.722  EAL: Detected lcore 8 as core 0 on socket 0
00:06:01.722  EAL: Detected lcore 9 as core 0 on socket 0
00:06:01.722  EAL: Maximum logical cores by configuration: 128
00:06:01.722  EAL: Detected CPU lcores: 10
00:06:01.722  EAL: Detected NUMA nodes: 1
00:06:01.722  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:06:01.722  EAL: Detected shared linkage of DPDK
00:06:01.722  EAL: No shared files mode enabled, IPC will be disabled
00:06:01.722  EAL: Selected IOVA mode 'PA'
00:06:01.722  EAL: Probing VFIO support...
00:06:01.722  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:01.722  EAL: VFIO modules not loaded, skipping VFIO support...
00:06:01.722  EAL: Ask a virtual area of 0x2e000 bytes
00:06:01.722  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:01.722  EAL: Setting up physically contiguous memory...
00:06:01.722  EAL: Setting maximum number of open files to 524288
00:06:01.722  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:01.722  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:01.722  EAL: Ask a virtual area of 0x61000 bytes
00:06:01.722  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:01.722  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:01.722  EAL: Ask a virtual area of 0x400000000 bytes
00:06:01.722  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:01.722  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:01.722  EAL: Ask a virtual area of 0x61000 bytes
00:06:01.722  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:01.722  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:01.722  EAL: Ask a virtual area of 0x400000000 bytes
00:06:01.722  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:01.722  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:01.722  EAL: Ask a virtual area of 0x61000 bytes
00:06:01.722  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:01.722  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:01.722  EAL: Ask a virtual area of 0x400000000 bytes
00:06:01.722  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:01.722  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:01.722  EAL: Ask a virtual area of 0x61000 bytes
00:06:01.722  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:01.722  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:01.722  EAL: Ask a virtual area of 0x400000000 bytes
00:06:01.722  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:01.722  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:01.722  EAL: Hugepages will be freed exactly as allocated.
00:06:01.722  EAL: No shared files mode enabled, IPC is disabled
00:06:01.722  EAL: No shared files mode enabled, IPC is disabled
00:06:01.722  EAL: TSC frequency is ~2200000 KHz
00:06:01.722  EAL: Main lcore 0 is ready (tid=7fdc5115aa40;cpuset=[0])
00:06:01.722  EAL: Trying to obtain current memory policy.
00:06:01.722  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:01.722  EAL: Restoring previous memory policy: 0
00:06:01.722  EAL: request: mp_malloc_sync
00:06:01.722  EAL: No shared files mode enabled, IPC is disabled
00:06:01.722  EAL: Heap on socket 0 was expanded by 2MB
00:06:01.722  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:01.722  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:06:01.722  EAL: Mem event callback 'spdk:(nil)' registered
00:06:01.722  EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:06:01.979  
00:06:01.979  
00:06:01.979       CUnit - A unit testing framework for C - Version 2.1-3
00:06:01.979       http://cunit.sourceforge.net/
00:06:01.979  
00:06:01.979  
00:06:01.979  Suite: components_suite
00:06:02.237    Test: vtophys_malloc_test ...passed
00:06:02.237    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:02.237  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.237  EAL: Restoring previous memory policy: 4
00:06:02.237  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.237  EAL: request: mp_malloc_sync
00:06:02.237  EAL: No shared files mode enabled, IPC is disabled
00:06:02.237  EAL: Heap on socket 0 was expanded by 4MB
00:06:02.237  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.237  EAL: request: mp_malloc_sync
00:06:02.237  EAL: No shared files mode enabled, IPC is disabled
00:06:02.237  EAL: Heap on socket 0 was shrunk by 4MB
00:06:02.237  EAL: Trying to obtain current memory policy.
00:06:02.237  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.237  EAL: Restoring previous memory policy: 4
00:06:02.237  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.237  EAL: request: mp_malloc_sync
00:06:02.237  EAL: No shared files mode enabled, IPC is disabled
00:06:02.237  EAL: Heap on socket 0 was expanded by 6MB
00:06:02.237  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.237  EAL: request: mp_malloc_sync
00:06:02.237  EAL: No shared files mode enabled, IPC is disabled
00:06:02.237  EAL: Heap on socket 0 was shrunk by 6MB
00:06:02.237  EAL: Trying to obtain current memory policy.
00:06:02.237  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.237  EAL: Restoring previous memory policy: 4
00:06:02.237  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.237  EAL: request: mp_malloc_sync
00:06:02.237  EAL: No shared files mode enabled, IPC is disabled
00:06:02.237  EAL: Heap on socket 0 was expanded by 10MB
00:06:02.237  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.237  EAL: request: mp_malloc_sync
00:06:02.237  EAL: No shared files mode enabled, IPC is disabled
00:06:02.237  EAL: Heap on socket 0 was shrunk by 10MB
00:06:02.237  EAL: Trying to obtain current memory policy.
00:06:02.237  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.237  EAL: Restoring previous memory policy: 4
00:06:02.237  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.237  EAL: request: mp_malloc_sync
00:06:02.237  EAL: No shared files mode enabled, IPC is disabled
00:06:02.237  EAL: Heap on socket 0 was expanded by 18MB
00:06:02.496  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.496  EAL: request: mp_malloc_sync
00:06:02.496  EAL: No shared files mode enabled, IPC is disabled
00:06:02.496  EAL: Heap on socket 0 was shrunk by 18MB
00:06:02.496  EAL: Trying to obtain current memory policy.
00:06:02.496  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.496  EAL: Restoring previous memory policy: 4
00:06:02.496  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.496  EAL: request: mp_malloc_sync
00:06:02.496  EAL: No shared files mode enabled, IPC is disabled
00:06:02.496  EAL: Heap on socket 0 was expanded by 34MB
00:06:02.496  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.496  EAL: request: mp_malloc_sync
00:06:02.496  EAL: No shared files mode enabled, IPC is disabled
00:06:02.496  EAL: Heap on socket 0 was shrunk by 34MB
00:06:02.496  EAL: Trying to obtain current memory policy.
00:06:02.496  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.496  EAL: Restoring previous memory policy: 4
00:06:02.496  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.496  EAL: request: mp_malloc_sync
00:06:02.496  EAL: No shared files mode enabled, IPC is disabled
00:06:02.496  EAL: Heap on socket 0 was expanded by 66MB
00:06:02.496  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.754  EAL: request: mp_malloc_sync
00:06:02.754  EAL: No shared files mode enabled, IPC is disabled
00:06:02.754  EAL: Heap on socket 0 was shrunk by 66MB
00:06:02.754  EAL: Trying to obtain current memory policy.
00:06:02.754  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:02.754  EAL: Restoring previous memory policy: 4
00:06:02.754  EAL: Calling mem event callback 'spdk:(nil)'
00:06:02.754  EAL: request: mp_malloc_sync
00:06:02.754  EAL: No shared files mode enabled, IPC is disabled
00:06:02.754  EAL: Heap on socket 0 was expanded by 130MB
00:06:03.013  EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.013  EAL: request: mp_malloc_sync
00:06:03.013  EAL: No shared files mode enabled, IPC is disabled
00:06:03.013  EAL: Heap on socket 0 was shrunk by 130MB
00:06:03.013  EAL: Trying to obtain current memory policy.
00:06:03.013  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:03.272  EAL: Restoring previous memory policy: 4
00:06:03.272  EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.272  EAL: request: mp_malloc_sync
00:06:03.272  EAL: No shared files mode enabled, IPC is disabled
00:06:03.272  EAL: Heap on socket 0 was expanded by 258MB
00:06:03.531  EAL: Calling mem event callback 'spdk:(nil)'
00:06:03.531  EAL: request: mp_malloc_sync
00:06:03.531  EAL: No shared files mode enabled, IPC is disabled
00:06:03.531  EAL: Heap on socket 0 was shrunk by 258MB
00:06:04.098  EAL: Trying to obtain current memory policy.
00:06:04.098  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:04.098  EAL: Restoring previous memory policy: 4
00:06:04.098  EAL: Calling mem event callback 'spdk:(nil)'
00:06:04.098  EAL: request: mp_malloc_sync
00:06:04.098  EAL: No shared files mode enabled, IPC is disabled
00:06:04.098  EAL: Heap on socket 0 was expanded by 514MB
00:06:05.035  EAL: Calling mem event callback 'spdk:(nil)'
00:06:05.035  EAL: request: mp_malloc_sync
00:06:05.035  EAL: No shared files mode enabled, IPC is disabled
00:06:05.035  EAL: Heap on socket 0 was shrunk by 514MB
00:06:05.603  EAL: Trying to obtain current memory policy.
00:06:05.603  EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:05.863  EAL: Restoring previous memory policy: 4
00:06:05.863  EAL: Calling mem event callback 'spdk:(nil)'
00:06:05.863  EAL: request: mp_malloc_sync
00:06:05.863  EAL: No shared files mode enabled, IPC is disabled
00:06:05.863  EAL: Heap on socket 0 was expanded by 1026MB
00:06:07.242  EAL: Calling mem event callback 'spdk:(nil)'
00:06:07.501  EAL: request: mp_malloc_sync
00:06:07.501  EAL: No shared files mode enabled, IPC is disabled
00:06:07.501  EAL: Heap on socket 0 was shrunk by 1026MB
00:06:08.877  passed
00:06:08.877  
00:06:08.877  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.877                suites      1      1    n/a      0        0
00:06:08.877                 tests      2      2      2      0        0
00:06:08.877               asserts   5649   5649   5649      0      n/a
00:06:08.877  
00:06:08.877  Elapsed time =    6.949 seconds
00:06:08.877  EAL: Calling mem event callback 'spdk:(nil)'
00:06:08.877  EAL: request: mp_malloc_sync
00:06:08.877  EAL: No shared files mode enabled, IPC is disabled
00:06:08.877  EAL: Heap on socket 0 was shrunk by 2MB
00:06:08.877  EAL: No shared files mode enabled, IPC is disabled
00:06:08.877  EAL: No shared files mode enabled, IPC is disabled
00:06:08.877  EAL: No shared files mode enabled, IPC is disabled
00:06:08.877  
00:06:08.877  real	0m7.291s
00:06:08.877  user	0m6.371s
00:06:08.877  sys	0m0.754s
00:06:08.877   14:16:47 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:08.877   14:16:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:08.877  ************************************
00:06:08.877  END TEST env_vtophys
00:06:08.877  ************************************
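
vtophys_spdk_malloc_test walks a doubling series of spdk allocation sizes; each one fires the 'spdk:(nil)' mem event callback, the EAL heap grows by the requested power of two plus 2 MB (4, 6, 10, ..., 1026 MB above), and the matching free shrinks it by the same amount, which is why every "expanded by" line pairs with a "shrunk by" line. Since "Hugepages will be freed exactly as allocated", free hugepage counts should return to their pre-test baseline; a quick way to confirm that (a sketch):

  # sketch: hugepage usage should return to baseline around the test
  grep -E 'HugePages_(Total|Free)' /proc/meminfo
  /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
  grep -E 'HugePages_(Total|Free)' /proc/meminfo            # Free should match the first read
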
00:06:08.877   14:16:47 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:08.877   14:16:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:08.877   14:16:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:08.877   14:16:47 env -- common/autotest_common.sh@10 -- # set +x
00:06:08.877  ************************************
00:06:08.877  START TEST env_pci
00:06:08.877  ************************************
00:06:08.877   14:16:47 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:08.877  
00:06:08.877  
00:06:08.877       CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.877       http://cunit.sourceforge.net/
00:06:08.877  
00:06:08.877  
00:06:08.877  Suite: pci
00:06:08.877    Test: pci_hook ...[2024-11-20 14:16:47.844544] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58145 has claimed it
00:06:09.136  passed
00:06:09.136  
00:06:09.136  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.136                suites      1      1    n/a      0        0
00:06:09.136                 tests      1      1      1      0        0
00:06:09.136               asserts     25     25     25      0      n/a
00:06:09.136  
00:06:09.136  Elapsed time =    0.006 seconds
00:06:09.136  EAL: Cannot find device (10000:00:01.0)
00:06:09.136  EAL: Failed to attach device on primary process
00:06:09.136  
00:06:09.136  real	0m0.076s
00:06:09.136  user	0m0.041s
00:06:09.136  sys	0m0.034s
00:06:09.136   14:16:47 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:09.136   14:16:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:09.136  ************************************
00:06:09.136  END TEST env_pci
00:06:09.136  ************************************
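
pci_hook exercises the device-claim path against a deliberately nonexistent address (10000:00:01.0): the first claim creates a POSIX lock file under /var/tmp, a second claim attempt fails against it (the "probably process 58145 has claimed it" message, 58145 apparently being pci_ut itself), and the trailing EAL lines confirm that attaching the phantom device also fails, so both error paths get covered. The locks are ordinary files keyed by BDF, so leftovers are easy to spot (a sketch):

  # sketch: list any per-device PCI claim locks left behind
  ls -l /var/tmp/spdk_pci_lock_* 2>/dev/null || echo "no devices currently claimed"
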
00:06:09.136   14:16:47 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:09.136    14:16:47 env -- env/env.sh@15 -- # uname
00:06:09.136   14:16:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:09.136   14:16:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:09.136   14:16:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:09.136   14:16:47 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:09.136   14:16:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:09.136   14:16:47 env -- common/autotest_common.sh@10 -- # set +x
00:06:09.136  ************************************
00:06:09.136  START TEST env_dpdk_post_init
00:06:09.136  ************************************
00:06:09.136   14:16:47 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:09.136  EAL: Detected CPU lcores: 10
00:06:09.136  EAL: Detected NUMA nodes: 1
00:06:09.136  EAL: Detected shared linkage of DPDK
00:06:09.136  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:09.136  EAL: Selected IOVA mode 'PA'
00:06:09.395  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:09.395  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:06:09.395  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:06:09.395  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1)
00:06:09.395  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1)
00:06:09.395  Starting DPDK initialization...
00:06:09.395  Starting SPDK post initialization...
00:06:09.395  SPDK NVMe probe
00:06:09.395  Attaching to 0000:00:10.0
00:06:09.395  Attaching to 0000:00:11.0
00:06:09.395  Attaching to 0000:00:12.0
00:06:09.395  Attaching to 0000:00:13.0
00:06:09.395  Attached to 0000:00:10.0
00:06:09.395  Attached to 0000:00:11.0
00:06:09.395  Attached to 0000:00:13.0
00:06:09.395  Attached to 0000:00:12.0
00:06:09.395  Cleaning up...
00:06:09.395  
00:06:09.395  real	0m0.345s
00:06:09.395  user	0m0.142s
00:06:09.395  sys	0m0.103s
00:06:09.395   14:16:48 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:09.395  ************************************
00:06:09.395   14:16:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:09.395  END TEST env_dpdk_post_init
00:06:09.395  ************************************
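
env_dpdk_post_init boots a full environment with the argv assembled by env.sh just before it: -c 0x1 pins one core, and on Linux --base-virtaddr=0x200000000000 anchors DPDK's memory reservations at a fixed base so SPDK's mappings land in a predictable region. It then probes all four emulated controllers with the spdk_nvme driver and detaches; the attach completions (13 before 12 above) arrive asynchronously, so their order need not match probe order. The wrapped invocation, verbatim from the trace:

  # sketch: the exact command run_test wrapped above
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000
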
00:06:09.395    14:16:48 env -- env/env.sh@26 -- # uname
00:06:09.395   14:16:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:09.395   14:16:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:09.395   14:16:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:09.395   14:16:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:09.395   14:16:48 env -- common/autotest_common.sh@10 -- # set +x
00:06:09.395  ************************************
00:06:09.395  START TEST env_mem_callbacks
00:06:09.395  ************************************
00:06:09.396   14:16:48 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:09.656  EAL: Detected CPU lcores: 10
00:06:09.656  EAL: Detected NUMA nodes: 1
00:06:09.656  EAL: Detected shared linkage of DPDK
00:06:09.656  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:09.656  EAL: Selected IOVA mode 'PA'
00:06:09.656  TELEMETRY: No legacy callbacks, legacy socket not created
00:06:09.656  
00:06:09.656  
00:06:09.656       CUnit - A unit testing framework for C - Version 2.1-3
00:06:09.656       http://cunit.sourceforge.net/
00:06:09.656  
00:06:09.656  
00:06:09.656  Suite: memory
00:06:09.656    Test: test ...
00:06:09.656  register 0x200000200000 2097152
00:06:09.656  malloc 3145728
00:06:09.656  register 0x200000400000 4194304
00:06:09.656  buf 0x2000004fffc0 len 3145728 PASSED
00:06:09.656  malloc 64
00:06:09.656  buf 0x2000004ffec0 len 64 PASSED
00:06:09.656  malloc 4194304
00:06:09.656  register 0x200000800000 6291456
00:06:09.656  buf 0x2000009fffc0 len 4194304 PASSED
00:06:09.656  free 0x2000004fffc0 3145728
00:06:09.656  free 0x2000004ffec0 64
00:06:09.656  unregister 0x200000400000 4194304 PASSED
00:06:09.656  free 0x2000009fffc0 4194304
00:06:09.656  unregister 0x200000800000 6291456 PASSED
00:06:09.656  malloc 8388608
00:06:09.656  register 0x200000400000 10485760
00:06:09.656  buf 0x2000005fffc0 len 8388608 PASSED
00:06:09.656  free 0x2000005fffc0 8388608
00:06:09.656  unregister 0x200000400000 10485760 PASSED
00:06:09.656  passed
00:06:09.656  
00:06:09.656  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:09.656                suites      1      1    n/a      0        0
00:06:09.656                 tests      1      1      1      0        0
00:06:09.656               asserts     15     15     15      0      n/a
00:06:09.656  
00:06:09.656  Elapsed time =    0.070 seconds
00:06:09.656  
00:06:09.656  real	0m0.275s
00:06:09.656  user	0m0.107s
00:06:09.656  sys	0m0.067s
00:06:09.656   14:16:48 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:09.656   14:16:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:09.656  ************************************
00:06:09.656  END TEST env_mem_callbacks
00:06:09.656  ************************************
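
mem_callbacks verifies that SPDK's mem event hook registers each new 2 MB-backed region DPDK hands out and unregisters it on free: every "register <vaddr> <len>" above is later balanced by an "unregister" of the same address and length, and each PASSED marks a buffer whose translation was checked in between. Saving the raw output makes the pairing easy to assert mechanically (a sketch; the log filename is hypothetical):

  # sketch: check that register/unregister lines balance exactly
  /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks > mem_callbacks.log
  awk '/^register /   {bal[$2] += $3}
       /^unregister / {bal[$2] -= $3}
       END {for (a in bal) if (bal[a]) {print "unbalanced:", a; exit 1}}' mem_callbacks.log
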
00:06:09.914  
00:06:09.914  real	0m8.783s
00:06:09.914  user	0m7.162s
00:06:09.914  sys	0m1.230s
00:06:09.914   14:16:48 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:09.914   14:16:48 env -- common/autotest_common.sh@10 -- # set +x
00:06:09.914  ************************************
00:06:09.914  END TEST env
00:06:09.914  ************************************
00:06:09.914   14:16:48  -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:09.914   14:16:48  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:09.914   14:16:48  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:09.914   14:16:48  -- common/autotest_common.sh@10 -- # set +x
00:06:09.914  ************************************
00:06:09.914  START TEST rpc
00:06:09.914  ************************************
00:06:09.914   14:16:48 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:06:09.914  * Looking for test storage...
00:06:09.914  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:09.914    14:16:48 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:09.914     14:16:48 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:09.914     14:16:48 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:09.914    14:16:48 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:09.914    14:16:48 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:09.914    14:16:48 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:09.914    14:16:48 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:09.914    14:16:48 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:09.914    14:16:48 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:09.914    14:16:48 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:09.914    14:16:48 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:09.915    14:16:48 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:09.915    14:16:48 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:09.915    14:16:48 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:09.915    14:16:48 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:09.915    14:16:48 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:09.915    14:16:48 rpc -- scripts/common.sh@345 -- # : 1
00:06:09.915    14:16:48 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:09.915    14:16:48 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:09.915     14:16:48 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:09.915     14:16:48 rpc -- scripts/common.sh@353 -- # local d=1
00:06:09.915     14:16:48 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:09.915     14:16:48 rpc -- scripts/common.sh@355 -- # echo 1
00:06:09.915    14:16:48 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:09.915     14:16:48 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:09.915     14:16:48 rpc -- scripts/common.sh@353 -- # local d=2
00:06:09.915     14:16:48 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:09.915     14:16:48 rpc -- scripts/common.sh@355 -- # echo 2
00:06:09.915    14:16:48 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:09.915    14:16:48 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:09.915    14:16:48 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:09.915    14:16:48 rpc -- scripts/common.sh@368 -- # return 0
00:06:09.915    14:16:48 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:09.915    14:16:48 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:09.915  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.915  		--rc genhtml_branch_coverage=1
00:06:09.915  		--rc genhtml_function_coverage=1
00:06:09.915  		--rc genhtml_legend=1
00:06:09.915  		--rc geninfo_all_blocks=1
00:06:09.915  		--rc geninfo_unexecuted_blocks=1
00:06:09.915  		
00:06:09.915  		'
00:06:09.915    14:16:48 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:09.915  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.915  		--rc genhtml_branch_coverage=1
00:06:09.915  		--rc genhtml_function_coverage=1
00:06:09.915  		--rc genhtml_legend=1
00:06:09.915  		--rc geninfo_all_blocks=1
00:06:09.915  		--rc geninfo_unexecuted_blocks=1
00:06:09.915  		
00:06:09.915  		'
00:06:09.915    14:16:48 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:09.915  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.915  		--rc genhtml_branch_coverage=1
00:06:09.915  		--rc genhtml_function_coverage=1
00:06:09.915  		--rc genhtml_legend=1
00:06:09.915  		--rc geninfo_all_blocks=1
00:06:09.915  		--rc geninfo_unexecuted_blocks=1
00:06:09.915  		
00:06:09.915  		'
00:06:09.915    14:16:48 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:09.915  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:09.915  		--rc genhtml_branch_coverage=1
00:06:09.915  		--rc genhtml_function_coverage=1
00:06:09.915  		--rc genhtml_legend=1
00:06:09.915  		--rc geninfo_all_blocks=1
00:06:09.915  		--rc geninfo_unexecuted_blocks=1
00:06:09.915  		
00:06:09.915  		'
00:06:09.915   14:16:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58272
00:06:09.915   14:16:48 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:06:09.915   14:16:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:09.915   14:16:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58272
00:06:09.915   14:16:48 rpc -- common/autotest_common.sh@835 -- # '[' -z 58272 ']'
00:06:09.915   14:16:48 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:09.915   14:16:48 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:09.915  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.915   14:16:48 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:09.915   14:16:48 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:09.915   14:16:48 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:10.173  [2024-11-20 14:16:49.016658] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:10.173  [2024-11-20 14:16:49.016844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58272 ]
00:06:10.430  [2024-11-20 14:16:49.207023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.430  [2024-11-20 14:16:49.314544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:10.430  [2024-11-20 14:16:49.314631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58272' to capture a snapshot of events at runtime.
00:06:10.430  [2024-11-20 14:16:49.314649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:10.430  [2024-11-20 14:16:49.314664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:10.430  [2024-11-20 14:16:49.314675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58272 for offline analysis/debug.
00:06:10.430  [2024-11-20 14:16:49.315885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.364   14:16:50 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:11.364   14:16:50 rpc -- common/autotest_common.sh@868 -- # return 0
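
With the target up, the rpc suite's fixture is visible in the notices above: spdk_tgt was started with -e bdev, so the bdev tracepoint group is enabled, the trace shared memory sits at /dev/shm/spdk_tgt_trace.pid58272, 'spdk_trace -s spdk_tgt -p 58272' can snapshot it live, and waitforlisten polled until the target answered on /var/tmp/spdk.sock. The same handshake by hand (a sketch using rpc.py and spdk_trace from the repo layout shown in the trace):

  # sketch: start the target, wait for its RPC socket, snapshot the trace
  cd /home/vagrant/spdk_repo/spdk
  ./build/bin/spdk_tgt -e bdev & tgt=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do sleep 0.2; done
  ./build/bin/spdk_trace -s spdk_tgt -p "$tgt" | head   # peek at captured bdev tracepoints
  kill "$tgt"
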
00:06:11.364   14:16:50 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:11.364   14:16:50 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:06:11.364   14:16:50 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:11.364   14:16:50 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:11.364   14:16:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:11.364   14:16:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:11.364   14:16:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:11.364  ************************************
00:06:11.364  START TEST rpc_integrity
00:06:11.364  ************************************
00:06:11.364   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:11.364    14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:11.364    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.364    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:11.364    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.364   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:11.364    14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:11.364   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:11.364    14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:11.364    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.364    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:11.364    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.364   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:11.364    14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:11.364    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.364    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:11.364    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.364   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:11.364  {
00:06:11.364  "name": "Malloc0",
00:06:11.364  "aliases": [
00:06:11.364  "82b72da6-6121-4e68-96b9-23d3c885e1fe"
00:06:11.364  ],
00:06:11.364  "product_name": "Malloc disk",
00:06:11.364  "block_size": 512,
00:06:11.364  "num_blocks": 16384,
00:06:11.364  "uuid": "82b72da6-6121-4e68-96b9-23d3c885e1fe",
00:06:11.364  "assigned_rate_limits": {
00:06:11.364  "rw_ios_per_sec": 0,
00:06:11.364  "rw_mbytes_per_sec": 0,
00:06:11.364  "r_mbytes_per_sec": 0,
00:06:11.364  "w_mbytes_per_sec": 0
00:06:11.364  },
00:06:11.364  "claimed": false,
00:06:11.364  "zoned": false,
00:06:11.364  "supported_io_types": {
00:06:11.364  "read": true,
00:06:11.364  "write": true,
00:06:11.364  "unmap": true,
00:06:11.364  "flush": true,
00:06:11.364  "reset": true,
00:06:11.364  "nvme_admin": false,
00:06:11.364  "nvme_io": false,
00:06:11.364  "nvme_io_md": false,
00:06:11.364  "write_zeroes": true,
00:06:11.364  "zcopy": true,
00:06:11.364  "get_zone_info": false,
00:06:11.364  "zone_management": false,
00:06:11.364  "zone_append": false,
00:06:11.364  "compare": false,
00:06:11.364  "compare_and_write": false,
00:06:11.364  "abort": true,
00:06:11.364  "seek_hole": false,
00:06:11.364  "seek_data": false,
00:06:11.364  "copy": true,
00:06:11.364  "nvme_iov_md": false
00:06:11.364  },
00:06:11.364  "memory_domains": [
00:06:11.364  {
00:06:11.364  "dma_device_id": "system",
00:06:11.364  "dma_device_type": 1
00:06:11.364  },
00:06:11.364  {
00:06:11.364  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:11.364  "dma_device_type": 2
00:06:11.364  }
00:06:11.364  ],
00:06:11.364  "driver_specific": {}
00:06:11.364  }
00:06:11.364  ]'
00:06:11.364    14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:11.364   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:11.364   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:11.364   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.364   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:11.364  [2024-11-20 14:16:50.275026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:11.364  [2024-11-20 14:16:50.275116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:11.364  [2024-11-20 14:16:50.275153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:06:11.364  [2024-11-20 14:16:50.275172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:11.364  [2024-11-20 14:16:50.277968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:11.364  [2024-11-20 14:16:50.278024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:11.364  Passthru0
00:06:11.364   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.365    14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:11.365    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.365    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:11.365    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.365   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:11.365  {
00:06:11.365  "name": "Malloc0",
00:06:11.365  "aliases": [
00:06:11.365  "82b72da6-6121-4e68-96b9-23d3c885e1fe"
00:06:11.365  ],
00:06:11.365  "product_name": "Malloc disk",
00:06:11.365  "block_size": 512,
00:06:11.365  "num_blocks": 16384,
00:06:11.365  "uuid": "82b72da6-6121-4e68-96b9-23d3c885e1fe",
00:06:11.365  "assigned_rate_limits": {
00:06:11.365  "rw_ios_per_sec": 0,
00:06:11.365  "rw_mbytes_per_sec": 0,
00:06:11.365  "r_mbytes_per_sec": 0,
00:06:11.365  "w_mbytes_per_sec": 0
00:06:11.365  },
00:06:11.365  "claimed": true,
00:06:11.365  "claim_type": "exclusive_write",
00:06:11.365  "zoned": false,
00:06:11.365  "supported_io_types": {
00:06:11.365  "read": true,
00:06:11.365  "write": true,
00:06:11.365  "unmap": true,
00:06:11.365  "flush": true,
00:06:11.365  "reset": true,
00:06:11.365  "nvme_admin": false,
00:06:11.365  "nvme_io": false,
00:06:11.365  "nvme_io_md": false,
00:06:11.365  "write_zeroes": true,
00:06:11.365  "zcopy": true,
00:06:11.365  "get_zone_info": false,
00:06:11.365  "zone_management": false,
00:06:11.365  "zone_append": false,
00:06:11.365  "compare": false,
00:06:11.365  "compare_and_write": false,
00:06:11.365  "abort": true,
00:06:11.365  "seek_hole": false,
00:06:11.365  "seek_data": false,
00:06:11.365  "copy": true,
00:06:11.365  "nvme_iov_md": false
00:06:11.365  },
00:06:11.365  "memory_domains": [
00:06:11.365  {
00:06:11.365  "dma_device_id": "system",
00:06:11.365  "dma_device_type": 1
00:06:11.365  },
00:06:11.365  {
00:06:11.365  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:11.365  "dma_device_type": 2
00:06:11.365  }
00:06:11.365  ],
00:06:11.365  "driver_specific": {}
00:06:11.365  },
00:06:11.365  {
00:06:11.365  "name": "Passthru0",
00:06:11.365  "aliases": [
00:06:11.365  "9aa855fe-233b-5647-a7e1-2d448b39996b"
00:06:11.365  ],
00:06:11.365  "product_name": "passthru",
00:06:11.365  "block_size": 512,
00:06:11.365  "num_blocks": 16384,
00:06:11.365  "uuid": "9aa855fe-233b-5647-a7e1-2d448b39996b",
00:06:11.365  "assigned_rate_limits": {
00:06:11.365  "rw_ios_per_sec": 0,
00:06:11.365  "rw_mbytes_per_sec": 0,
00:06:11.365  "r_mbytes_per_sec": 0,
00:06:11.365  "w_mbytes_per_sec": 0
00:06:11.365  },
00:06:11.365  "claimed": false,
00:06:11.365  "zoned": false,
00:06:11.365  "supported_io_types": {
00:06:11.365  "read": true,
00:06:11.365  "write": true,
00:06:11.365  "unmap": true,
00:06:11.365  "flush": true,
00:06:11.365  "reset": true,
00:06:11.365  "nvme_admin": false,
00:06:11.365  "nvme_io": false,
00:06:11.365  "nvme_io_md": false,
00:06:11.365  "write_zeroes": true,
00:06:11.365  "zcopy": true,
00:06:11.365  "get_zone_info": false,
00:06:11.365  "zone_management": false,
00:06:11.365  "zone_append": false,
00:06:11.365  "compare": false,
00:06:11.365  "compare_and_write": false,
00:06:11.365  "abort": true,
00:06:11.365  "seek_hole": false,
00:06:11.365  "seek_data": false,
00:06:11.365  "copy": true,
00:06:11.365  "nvme_iov_md": false
00:06:11.365  },
00:06:11.365  "memory_domains": [
00:06:11.365  {
00:06:11.365  "dma_device_id": "system",
00:06:11.365  "dma_device_type": 1
00:06:11.365  },
00:06:11.365  {
00:06:11.365  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:11.365  "dma_device_type": 2
00:06:11.365  }
00:06:11.365  ],
00:06:11.365  "driver_specific": {
00:06:11.365  "passthru": {
00:06:11.365  "name": "Passthru0",
00:06:11.365  "base_bdev_name": "Malloc0"
00:06:11.365  }
00:06:11.365  }
00:06:11.365  }
00:06:11.365  ]'
00:06:11.365    14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:11.623   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:11.623   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:11.623   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.623   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:11.623   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.623   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:11.623   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.624   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:11.624   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.624    14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:11.624    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.624    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:11.624    14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.624   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:11.624    14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:11.624   14:16:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:11.624  
00:06:11.624  real	0m0.349s
00:06:11.624  user	0m0.228s
00:06:11.624  sys	0m0.034s
00:06:11.624   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:11.624   14:16:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:11.624  ************************************
00:06:11.624  END TEST rpc_integrity
00:06:11.624  ************************************
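
rpc_integrity is a create/inspect/delete round trip over the socket: bdev_get_bdevs must start empty, bdev_malloc_create 8 512 makes an 8 MiB malloc disk with 512-byte blocks (hence num_blocks 16384 in the JSON above), bdev_passthru_create stacks Passthru0 on top and flips Malloc0 to "claimed": true with claim_type exclusive_write, and the two deletes drain the list back to zero. The same sequence driven directly with scripts/rpc.py (effectively what rpc_cmd wraps here):

  # sketch: the rpc_integrity sequence by hand
  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py bdev_malloc_create 8 512                      # -> Malloc0
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # -> Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                    # -> 2
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
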
00:06:11.624   14:16:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:11.624   14:16:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:11.624   14:16:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:11.624   14:16:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:11.624  ************************************
00:06:11.624  START TEST rpc_plugins
00:06:11.624  ************************************
00:06:11.624   14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:06:11.624    14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:11.624    14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.624    14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:11.624    14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.624   14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:11.624    14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:11.624    14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.624    14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:11.624    14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.624   14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:06:11.624  {
00:06:11.624  "name": "Malloc1",
00:06:11.624  "aliases": [
00:06:11.624  "75ce5e63-535c-48d8-ba79-48f88d47a186"
00:06:11.624  ],
00:06:11.624  "product_name": "Malloc disk",
00:06:11.624  "block_size": 4096,
00:06:11.624  "num_blocks": 256,
00:06:11.624  "uuid": "75ce5e63-535c-48d8-ba79-48f88d47a186",
00:06:11.624  "assigned_rate_limits": {
00:06:11.624  "rw_ios_per_sec": 0,
00:06:11.624  "rw_mbytes_per_sec": 0,
00:06:11.624  "r_mbytes_per_sec": 0,
00:06:11.624  "w_mbytes_per_sec": 0
00:06:11.624  },
00:06:11.624  "claimed": false,
00:06:11.624  "zoned": false,
00:06:11.624  "supported_io_types": {
00:06:11.624  "read": true,
00:06:11.624  "write": true,
00:06:11.624  "unmap": true,
00:06:11.624  "flush": true,
00:06:11.624  "reset": true,
00:06:11.624  "nvme_admin": false,
00:06:11.624  "nvme_io": false,
00:06:11.624  "nvme_io_md": false,
00:06:11.624  "write_zeroes": true,
00:06:11.624  "zcopy": true,
00:06:11.624  "get_zone_info": false,
00:06:11.624  "zone_management": false,
00:06:11.624  "zone_append": false,
00:06:11.624  "compare": false,
00:06:11.624  "compare_and_write": false,
00:06:11.624  "abort": true,
00:06:11.624  "seek_hole": false,
00:06:11.624  "seek_data": false,
00:06:11.624  "copy": true,
00:06:11.624  "nvme_iov_md": false
00:06:11.624  },
00:06:11.624  "memory_domains": [
00:06:11.624  {
00:06:11.624  "dma_device_id": "system",
00:06:11.624  "dma_device_type": 1
00:06:11.624  },
00:06:11.624  {
00:06:11.624  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:11.624  "dma_device_type": 2
00:06:11.624  }
00:06:11.624  ],
00:06:11.624  "driver_specific": {}
00:06:11.624  }
00:06:11.624  ]'
00:06:11.624    14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:06:11.883   14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:11.883   14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:11.883   14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.883   14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:11.883   14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.883    14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:11.883    14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.883    14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:11.883    14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.883   14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:11.883    14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:06:11.883   14:16:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:11.883  
00:06:11.883  real	0m0.164s
00:06:11.883  user	0m0.107s
00:06:11.883  sys	0m0.015s
00:06:11.883   14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:11.883   14:16:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:06:11.883  ************************************
00:06:11.883  END TEST rpc_plugins
00:06:11.883  ************************************
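
For reference, the rpc_plugins round-trip traced above reduces to four direct calls. A minimal sketch, assuming a target is already listening on the default /var/tmp/spdk.sock and that PYTHONPATH points at the directory holding rpc_plugin.py (which is how the harness makes --plugin resolvable):

    # plugin creates one malloc bdev, the list grows to 1, then shrinks back to 0
    malloc=$(scripts/rpc.py --plugin rpc_plugin create_malloc)
    [ "$(scripts/rpc.py bdev_get_bdevs | jq length)" -eq 1 ]
    scripts/rpc.py --plugin rpc_plugin delete_malloc "$malloc"
    [ "$(scripts/rpc.py bdev_get_bdevs | jq length)" -eq 0 ]
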
00:06:11.883   14:16:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:11.883   14:16:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:11.883   14:16:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:11.883   14:16:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:11.883  ************************************
00:06:11.883  START TEST rpc_trace_cmd_test
00:06:11.883  ************************************
00:06:11.883   14:16:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:06:11.883   14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:06:11.883    14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:11.883    14:16:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:11.883    14:16:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:11.883    14:16:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:11.883   14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:06:11.883  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58272",
00:06:11.883  "tpoint_group_mask": "0x8",
00:06:11.883  "iscsi_conn": {
00:06:11.883  "mask": "0x2",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "scsi": {
00:06:11.883  "mask": "0x4",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "bdev": {
00:06:11.883  "mask": "0x8",
00:06:11.883  "tpoint_mask": "0xffffffffffffffff"
00:06:11.883  },
00:06:11.883  "nvmf_rdma": {
00:06:11.883  "mask": "0x10",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "nvmf_tcp": {
00:06:11.883  "mask": "0x20",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "ftl": {
00:06:11.883  "mask": "0x40",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "blobfs": {
00:06:11.883  "mask": "0x80",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "dsa": {
00:06:11.883  "mask": "0x200",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "thread": {
00:06:11.883  "mask": "0x400",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "nvme_pcie": {
00:06:11.883  "mask": "0x800",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "iaa": {
00:06:11.883  "mask": "0x1000",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "nvme_tcp": {
00:06:11.883  "mask": "0x2000",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "bdev_nvme": {
00:06:11.883  "mask": "0x4000",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "sock": {
00:06:11.883  "mask": "0x8000",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "blob": {
00:06:11.883  "mask": "0x10000",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "bdev_raid": {
00:06:11.883  "mask": "0x20000",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  },
00:06:11.883  "scheduler": {
00:06:11.883  "mask": "0x40000",
00:06:11.883  "tpoint_mask": "0x0"
00:06:11.883  }
00:06:11.883  }'
00:06:11.883    14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:06:11.883   14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:06:11.883    14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:06:11.883   14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:06:11.883    14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:06:12.142   14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:06:12.142    14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:06:12.142   14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:06:12.142    14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:06:12.142   14:16:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:06:12.142  
00:06:12.142  real	0m0.261s
00:06:12.142  user	0m0.232s
00:06:12.142  sys	0m0.022s
00:06:12.142   14:16:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:12.142   14:16:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:06:12.142  ************************************
00:06:12.142  END TEST rpc_trace_cmd_test
00:06:12.142  ************************************
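
The trace_get_info checks above assert structure rather than exact values: the reply must carry a tpoint_group_mask and a tpoint_shm_path, and the bdev group (mask 0x8, matching the reported group mask, presumably because the target was started with tracing enabled for that group) must have a non-zero tpoint mask. The same assertions as a sketch, assuming scripts/rpc.py against the default socket:

    info=$(scripts/rpc.py trace_get_info)
    jq -e 'has("tpoint_group_mask") and has("tpoint_shm_path")' <<<"$info"   # required keys present
    [ "$(jq -r .bdev.tpoint_mask <<<"$info")" != "0x0" ]                     # bdev tracing is live
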
00:06:12.142   14:16:51 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:06:12.142   14:16:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:06:12.142   14:16:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:06:12.142   14:16:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:12.142   14:16:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:12.142   14:16:51 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:12.142  ************************************
00:06:12.142  START TEST rpc_daemon_integrity
00:06:12.142  ************************************
00:06:12.142   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:12.142    14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:12.142    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.142    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:12.142    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.142   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:12.142    14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:12.142   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:12.142    14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:12.142    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.142    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:12.142    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.401   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:06:12.401    14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:12.401    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.401    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:12.401    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.401   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:12.401  {
00:06:12.401  "name": "Malloc2",
00:06:12.401  "aliases": [
00:06:12.401  "29623f4e-05b3-4f0b-955a-cf2373d9ee1f"
00:06:12.401  ],
00:06:12.401  "product_name": "Malloc disk",
00:06:12.401  "block_size": 512,
00:06:12.401  "num_blocks": 16384,
00:06:12.401  "uuid": "29623f4e-05b3-4f0b-955a-cf2373d9ee1f",
00:06:12.401  "assigned_rate_limits": {
00:06:12.401  "rw_ios_per_sec": 0,
00:06:12.401  "rw_mbytes_per_sec": 0,
00:06:12.401  "r_mbytes_per_sec": 0,
00:06:12.401  "w_mbytes_per_sec": 0
00:06:12.401  },
00:06:12.401  "claimed": false,
00:06:12.401  "zoned": false,
00:06:12.401  "supported_io_types": {
00:06:12.401  "read": true,
00:06:12.401  "write": true,
00:06:12.401  "unmap": true,
00:06:12.401  "flush": true,
00:06:12.401  "reset": true,
00:06:12.401  "nvme_admin": false,
00:06:12.401  "nvme_io": false,
00:06:12.401  "nvme_io_md": false,
00:06:12.401  "write_zeroes": true,
00:06:12.401  "zcopy": true,
00:06:12.401  "get_zone_info": false,
00:06:12.401  "zone_management": false,
00:06:12.401  "zone_append": false,
00:06:12.401  "compare": false,
00:06:12.401  "compare_and_write": false,
00:06:12.401  "abort": true,
00:06:12.401  "seek_hole": false,
00:06:12.401  "seek_data": false,
00:06:12.401  "copy": true,
00:06:12.401  "nvme_iov_md": false
00:06:12.401  },
00:06:12.401  "memory_domains": [
00:06:12.401  {
00:06:12.401  "dma_device_id": "system",
00:06:12.401  "dma_device_type": 1
00:06:12.401  },
00:06:12.401  {
00:06:12.401  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:12.401  "dma_device_type": 2
00:06:12.402  }
00:06:12.402  ],
00:06:12.402  "driver_specific": {}
00:06:12.402  }
00:06:12.402  ]'
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:12.402  [2024-11-20 14:16:51.197249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:06:12.402  [2024-11-20 14:16:51.197327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:12.402  [2024-11-20 14:16:51.197360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:06:12.402  [2024-11-20 14:16:51.197378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:12.402  [2024-11-20 14:16:51.200087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:12.402  [2024-11-20 14:16:51.200140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:12.402  Passthru0
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:12.402  {
00:06:12.402  "name": "Malloc2",
00:06:12.402  "aliases": [
00:06:12.402  "29623f4e-05b3-4f0b-955a-cf2373d9ee1f"
00:06:12.402  ],
00:06:12.402  "product_name": "Malloc disk",
00:06:12.402  "block_size": 512,
00:06:12.402  "num_blocks": 16384,
00:06:12.402  "uuid": "29623f4e-05b3-4f0b-955a-cf2373d9ee1f",
00:06:12.402  "assigned_rate_limits": {
00:06:12.402  "rw_ios_per_sec": 0,
00:06:12.402  "rw_mbytes_per_sec": 0,
00:06:12.402  "r_mbytes_per_sec": 0,
00:06:12.402  "w_mbytes_per_sec": 0
00:06:12.402  },
00:06:12.402  "claimed": true,
00:06:12.402  "claim_type": "exclusive_write",
00:06:12.402  "zoned": false,
00:06:12.402  "supported_io_types": {
00:06:12.402  "read": true,
00:06:12.402  "write": true,
00:06:12.402  "unmap": true,
00:06:12.402  "flush": true,
00:06:12.402  "reset": true,
00:06:12.402  "nvme_admin": false,
00:06:12.402  "nvme_io": false,
00:06:12.402  "nvme_io_md": false,
00:06:12.402  "write_zeroes": true,
00:06:12.402  "zcopy": true,
00:06:12.402  "get_zone_info": false,
00:06:12.402  "zone_management": false,
00:06:12.402  "zone_append": false,
00:06:12.402  "compare": false,
00:06:12.402  "compare_and_write": false,
00:06:12.402  "abort": true,
00:06:12.402  "seek_hole": false,
00:06:12.402  "seek_data": false,
00:06:12.402  "copy": true,
00:06:12.402  "nvme_iov_md": false
00:06:12.402  },
00:06:12.402  "memory_domains": [
00:06:12.402  {
00:06:12.402  "dma_device_id": "system",
00:06:12.402  "dma_device_type": 1
00:06:12.402  },
00:06:12.402  {
00:06:12.402  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:12.402  "dma_device_type": 2
00:06:12.402  }
00:06:12.402  ],
00:06:12.402  "driver_specific": {}
00:06:12.402  },
00:06:12.402  {
00:06:12.402  "name": "Passthru0",
00:06:12.402  "aliases": [
00:06:12.402  "76427bcf-09ca-5752-a1ea-c035679910f3"
00:06:12.402  ],
00:06:12.402  "product_name": "passthru",
00:06:12.402  "block_size": 512,
00:06:12.402  "num_blocks": 16384,
00:06:12.402  "uuid": "76427bcf-09ca-5752-a1ea-c035679910f3",
00:06:12.402  "assigned_rate_limits": {
00:06:12.402  "rw_ios_per_sec": 0,
00:06:12.402  "rw_mbytes_per_sec": 0,
00:06:12.402  "r_mbytes_per_sec": 0,
00:06:12.402  "w_mbytes_per_sec": 0
00:06:12.402  },
00:06:12.402  "claimed": false,
00:06:12.402  "zoned": false,
00:06:12.402  "supported_io_types": {
00:06:12.402  "read": true,
00:06:12.402  "write": true,
00:06:12.402  "unmap": true,
00:06:12.402  "flush": true,
00:06:12.402  "reset": true,
00:06:12.402  "nvme_admin": false,
00:06:12.402  "nvme_io": false,
00:06:12.402  "nvme_io_md": false,
00:06:12.402  "write_zeroes": true,
00:06:12.402  "zcopy": true,
00:06:12.402  "get_zone_info": false,
00:06:12.402  "zone_management": false,
00:06:12.402  "zone_append": false,
00:06:12.402  "compare": false,
00:06:12.402  "compare_and_write": false,
00:06:12.402  "abort": true,
00:06:12.402  "seek_hole": false,
00:06:12.402  "seek_data": false,
00:06:12.402  "copy": true,
00:06:12.402  "nvme_iov_md": false
00:06:12.402  },
00:06:12.402  "memory_domains": [
00:06:12.402  {
00:06:12.402  "dma_device_id": "system",
00:06:12.402  "dma_device_type": 1
00:06:12.402  },
00:06:12.402  {
00:06:12.402  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:12.402  "dma_device_type": 2
00:06:12.402  }
00:06:12.402  ],
00:06:12.402  "driver_specific": {
00:06:12.402  "passthru": {
00:06:12.402  "name": "Passthru0",
00:06:12.402  "base_bdev_name": "Malloc2"
00:06:12.402  }
00:06:12.402  }
00:06:12.402  }
00:06:12.402  ]'
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:12.402    14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:12.402  
00:06:12.402  real	0m0.316s
00:06:12.402  user	0m0.190s
00:06:12.402  sys	0m0.039s
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:12.402   14:16:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:12.402  ************************************
00:06:12.402  END TEST rpc_daemon_integrity
00:06:12.402  ************************************
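
rpc_daemon_integrity runs the same shape of test as rpc_integrity, and the bdev_get_bdevs dump above shows the interesting side effect: after bdev_passthru_create, Malloc2 reports "claimed": true with claim_type "exclusive_write", and the claim disappears once the passthru is deleted. A minimal sketch of that claim check, assuming Malloc2 already exists:

    scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
    scripts/rpc.py bdev_get_bdevs -b Malloc2 | jq -e '.[0].claimed == true'    # base bdev is claimed
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_get_bdevs -b Malloc2 | jq -e '.[0].claimed == false'   # claim released
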
00:06:12.660   14:16:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:06:12.660   14:16:51 rpc -- rpc/rpc.sh@84 -- # killprocess 58272
00:06:12.660   14:16:51 rpc -- common/autotest_common.sh@954 -- # '[' -z 58272 ']'
00:06:12.660   14:16:51 rpc -- common/autotest_common.sh@958 -- # kill -0 58272
00:06:12.660    14:16:51 rpc -- common/autotest_common.sh@959 -- # uname
00:06:12.660   14:16:51 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:12.660    14:16:51 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58272
00:06:12.661   14:16:51 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:12.661   14:16:51 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:12.661   14:16:51 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58272'
00:06:12.661  killing process with pid 58272
00:06:12.661   14:16:51 rpc -- common/autotest_common.sh@973 -- # kill 58272
00:06:12.661   14:16:51 rpc -- common/autotest_common.sh@978 -- # wait 58272
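
The killprocess sequence traced above ('[' -z ... ']', kill -0, uname, ps, the sudo guard, kill, wait) recurs after every suite in this log. A condensed reconstruction from the trace; the real helper in autotest_common.sh has more branches (notably for targets launched under sudo):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid supplied
        kill -0 "$pid" || return 0           # already gone, nothing to do
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1   # don't SIGTERM the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }
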
00:06:14.563  
00:06:14.563  real	0m4.815s
00:06:14.563  user	0m5.651s
00:06:14.563  sys	0m0.751s
00:06:14.563   14:16:53 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:14.563   14:16:53 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.563  ************************************
00:06:14.563  END TEST rpc
00:06:14.563  ************************************
00:06:14.822   14:16:53  -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:06:14.822   14:16:53  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:14.822   14:16:53  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.822   14:16:53  -- common/autotest_common.sh@10 -- # set +x
00:06:14.822  ************************************
00:06:14.822  START TEST skip_rpc
00:06:14.822  ************************************
00:06:14.822   14:16:53 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:06:14.822  * Looking for test storage...
00:06:14.822  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:14.822    14:16:53 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:14.822     14:16:53 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:14.822     14:16:53 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:14.822    14:16:53 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@345 -- # : 1
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:14.822     14:16:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:14.822     14:16:53 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:06:14.822     14:16:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:14.822     14:16:53 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:14.822     14:16:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:14.822     14:16:53 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:06:14.822     14:16:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:14.822     14:16:53 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:14.822    14:16:53 skip_rpc -- scripts/common.sh@368 -- # return 0
00:06:14.822    14:16:53 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:14.822    14:16:53 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:14.822  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.822  		--rc genhtml_branch_coverage=1
00:06:14.822  		--rc genhtml_function_coverage=1
00:06:14.822  		--rc genhtml_legend=1
00:06:14.822  		--rc geninfo_all_blocks=1
00:06:14.822  		--rc geninfo_unexecuted_blocks=1
00:06:14.822  		
00:06:14.822  		'
00:06:14.822    14:16:53 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:14.823  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.823  		--rc genhtml_branch_coverage=1
00:06:14.823  		--rc genhtml_function_coverage=1
00:06:14.823  		--rc genhtml_legend=1
00:06:14.823  		--rc geninfo_all_blocks=1
00:06:14.823  		--rc geninfo_unexecuted_blocks=1
00:06:14.823  		
00:06:14.823  		'
00:06:14.823    14:16:53 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:14.823  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.823  		--rc genhtml_branch_coverage=1
00:06:14.823  		--rc genhtml_function_coverage=1
00:06:14.823  		--rc genhtml_legend=1
00:06:14.823  		--rc geninfo_all_blocks=1
00:06:14.823  		--rc geninfo_unexecuted_blocks=1
00:06:14.823  		
00:06:14.823  		'
00:06:14.823    14:16:53 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:14.823  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:14.823  		--rc genhtml_branch_coverage=1
00:06:14.823  		--rc genhtml_function_coverage=1
00:06:14.823  		--rc genhtml_legend=1
00:06:14.823  		--rc geninfo_all_blocks=1
00:06:14.823  		--rc geninfo_unexecuted_blocks=1
00:06:14.823  		
00:06:14.823  		'
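
The burst of scripts/common.sh lines above is the lcov version probe: lt 1.15 2 asks whether the installed lcov predates 2.x so the matching LCOV_OPTS can be exported. Boiled down to the '<' path actually exercised here (the real cmp_versions handles every comparison operator):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: v                     # split version strings on . - :
        local -a ver1 ver2
        read -ra ver1 <<<"$1"
        read -ra ver2 <<<"$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && return 1   # missing components compare as 0
            ((ver1[v] < ver2[v])) && return 0
        done
        return 1                                # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2   # true for lcov 1.x
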
00:06:14.823   14:16:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:14.823   14:16:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:14.823   14:16:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:06:14.823   14:16:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:14.823   14:16:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:14.823   14:16:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:14.823  ************************************
00:06:14.823  START TEST skip_rpc
00:06:14.823  ************************************
00:06:14.823   14:16:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:06:14.823   14:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58501
00:06:14.823   14:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:14.823   14:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:06:14.823   14:16:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:06:15.091  [2024-11-20 14:16:53.871229] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:15.091  [2024-11-20 14:16:53.871616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58501 ]
00:06:15.091  [2024-11-20 14:16:54.054214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.349  [2024-11-20 14:16:54.161502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:20.612    14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58501
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58501 ']'
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58501
00:06:20.612    14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:20.612    14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58501
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58501'
00:06:20.612  killing process with pid 58501
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58501
00:06:20.612   14:16:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58501
00:06:21.986  
00:06:21.986  real	0m7.133s
00:06:21.986  user	0m6.708s
00:06:21.986  sys	0m0.320s
00:06:21.986   14:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:21.986   14:17:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:21.986  ************************************
00:06:21.986  END TEST skip_rpc
00:06:21.986  ************************************
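
The skip_rpc pass above hinges on the NOT helper: with --no-rpc-server the target boots its reactor but never opens /var/tmp/spdk.sock, so spdk_get_version has to fail, and NOT inverts that failure into success. A skeleton of the flow; spdk_tgt and rpc_cmd stand in for the full paths and wrappers used in the log:

    NOT() { ! "$@"; }                   # simplified; the real helper also normalizes exit codes
    spdk_tgt --no-rpc-server -m 0x1 & spdk_pid=$!
    sleep 5                             # no socket to poll, so just give startup time
    NOT rpc_cmd spdk_get_version        # must fail: there is no RPC server
    killprocess "$spdk_pid"
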
00:06:21.986   14:17:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:06:21.986   14:17:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:21.986   14:17:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:21.986   14:17:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:21.986  ************************************
00:06:21.986  START TEST skip_rpc_with_json
00:06:21.986  ************************************
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58604
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58604
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58604 ']'
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:21.986  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:21.986   14:17:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:22.244  [2024-11-20 14:17:01.055117] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:22.244  [2024-11-20 14:17:01.055314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58604 ]
00:06:22.500  [2024-11-20 14:17:01.239537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:22.500  [2024-11-20 14:17:01.346226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
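
Unlike skip_rpc, this test starts the target with its RPC server enabled, so waitforlisten can poll for readiness instead of sleeping blind. A sketch of the idea only, not the exact helper, though the trace above shows the same rpc_addr and max_retries locals:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1        # target died before listening
            [ -S "$rpc_addr" ] && return 0    # socket is up: ready
            sleep 0.5
        done
        return 1
    }
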
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:23.433  [2024-11-20 14:17:02.116738] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:06:23.433  request:
00:06:23.433  {
00:06:23.433  "trtype": "tcp",
00:06:23.433  "method": "nvmf_get_transports",
00:06:23.433  "req_id": 1
00:06:23.433  }
00:06:23.433  Got JSON-RPC error response
00:06:23.433  response:
00:06:23.433  {
00:06:23.433  "code": -19,
00:06:23.433  "message": "No such device"
00:06:23.433  }
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:23.433  [2024-11-20 14:17:02.128875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:23.433   14:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:23.433  {
00:06:23.433  "subsystems": [
00:06:23.433  {
00:06:23.433  "subsystem": "fsdev",
00:06:23.433  "config": [
00:06:23.433  {
00:06:23.433  "method": "fsdev_set_opts",
00:06:23.433  "params": {
00:06:23.433  "fsdev_io_pool_size": 65535,
00:06:23.433  "fsdev_io_cache_size": 256
00:06:23.433  }
00:06:23.433  }
00:06:23.433  ]
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "subsystem": "keyring",
00:06:23.433  "config": []
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "subsystem": "iobuf",
00:06:23.433  "config": [
00:06:23.433  {
00:06:23.433  "method": "iobuf_set_options",
00:06:23.433  "params": {
00:06:23.433  "small_pool_count": 8192,
00:06:23.433  "large_pool_count": 1024,
00:06:23.433  "small_bufsize": 8192,
00:06:23.433  "large_bufsize": 135168,
00:06:23.433  "enable_numa": false
00:06:23.433  }
00:06:23.433  }
00:06:23.433  ]
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "subsystem": "sock",
00:06:23.433  "config": [
00:06:23.433  {
00:06:23.433  "method": "sock_set_default_impl",
00:06:23.433  "params": {
00:06:23.433  "impl_name": "posix"
00:06:23.433  }
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "method": "sock_impl_set_options",
00:06:23.433  "params": {
00:06:23.433  "impl_name": "ssl",
00:06:23.433  "recv_buf_size": 4096,
00:06:23.433  "send_buf_size": 4096,
00:06:23.433  "enable_recv_pipe": true,
00:06:23.433  "enable_quickack": false,
00:06:23.433  "enable_placement_id": 0,
00:06:23.433  "enable_zerocopy_send_server": true,
00:06:23.433  "enable_zerocopy_send_client": false,
00:06:23.433  "zerocopy_threshold": 0,
00:06:23.433  "tls_version": 0,
00:06:23.433  "enable_ktls": false
00:06:23.433  }
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "method": "sock_impl_set_options",
00:06:23.433  "params": {
00:06:23.433  "impl_name": "posix",
00:06:23.433  "recv_buf_size": 2097152,
00:06:23.433  "send_buf_size": 2097152,
00:06:23.433  "enable_recv_pipe": true,
00:06:23.433  "enable_quickack": false,
00:06:23.433  "enable_placement_id": 0,
00:06:23.433  "enable_zerocopy_send_server": true,
00:06:23.433  "enable_zerocopy_send_client": false,
00:06:23.433  "zerocopy_threshold": 0,
00:06:23.433  "tls_version": 0,
00:06:23.433  "enable_ktls": false
00:06:23.433  }
00:06:23.433  }
00:06:23.433  ]
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "subsystem": "vmd",
00:06:23.433  "config": []
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "subsystem": "accel",
00:06:23.433  "config": [
00:06:23.433  {
00:06:23.433  "method": "accel_set_options",
00:06:23.433  "params": {
00:06:23.433  "small_cache_size": 128,
00:06:23.433  "large_cache_size": 16,
00:06:23.433  "task_count": 2048,
00:06:23.433  "sequence_count": 2048,
00:06:23.433  "buf_count": 2048
00:06:23.433  }
00:06:23.433  }
00:06:23.433  ]
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "subsystem": "bdev",
00:06:23.433  "config": [
00:06:23.433  {
00:06:23.433  "method": "bdev_set_options",
00:06:23.433  "params": {
00:06:23.433  "bdev_io_pool_size": 65535,
00:06:23.433  "bdev_io_cache_size": 256,
00:06:23.433  "bdev_auto_examine": true,
00:06:23.433  "iobuf_small_cache_size": 128,
00:06:23.433  "iobuf_large_cache_size": 16
00:06:23.433  }
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "method": "bdev_raid_set_options",
00:06:23.433  "params": {
00:06:23.433  "process_window_size_kb": 1024,
00:06:23.433  "process_max_bandwidth_mb_sec": 0
00:06:23.433  }
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "method": "bdev_iscsi_set_options",
00:06:23.433  "params": {
00:06:23.433  "timeout_sec": 30
00:06:23.433  }
00:06:23.433  },
00:06:23.433  {
00:06:23.433  "method": "bdev_nvme_set_options",
00:06:23.433  "params": {
00:06:23.433  "action_on_timeout": "none",
00:06:23.433  "timeout_us": 0,
00:06:23.433  "timeout_admin_us": 0,
00:06:23.433  "keep_alive_timeout_ms": 10000,
00:06:23.433  "arbitration_burst": 0,
00:06:23.433  "low_priority_weight": 0,
00:06:23.433  "medium_priority_weight": 0,
00:06:23.433  "high_priority_weight": 0,
00:06:23.433  "nvme_adminq_poll_period_us": 10000,
00:06:23.433  "nvme_ioq_poll_period_us": 0,
00:06:23.433  "io_queue_requests": 0,
00:06:23.433  "delay_cmd_submit": true,
00:06:23.433  "transport_retry_count": 4,
00:06:23.433  "bdev_retry_count": 3,
00:06:23.433  "transport_ack_timeout": 0,
00:06:23.433  "ctrlr_loss_timeout_sec": 0,
00:06:23.433  "reconnect_delay_sec": 0,
00:06:23.433  "fast_io_fail_timeout_sec": 0,
00:06:23.433  "disable_auto_failback": false,
00:06:23.433  "generate_uuids": false,
00:06:23.433  "transport_tos": 0,
00:06:23.433  "nvme_error_stat": false,
00:06:23.433  "rdma_srq_size": 0,
00:06:23.433  "io_path_stat": false,
00:06:23.433  "allow_accel_sequence": false,
00:06:23.433  "rdma_max_cq_size": 0,
00:06:23.434  "rdma_cm_event_timeout_ms": 0,
00:06:23.434  "dhchap_digests": [
00:06:23.434  "sha256",
00:06:23.434  "sha384",
00:06:23.434  "sha512"
00:06:23.434  ],
00:06:23.434  "dhchap_dhgroups": [
00:06:23.434  "null",
00:06:23.434  "ffdhe2048",
00:06:23.434  "ffdhe3072",
00:06:23.434  "ffdhe4096",
00:06:23.434  "ffdhe6144",
00:06:23.434  "ffdhe8192"
00:06:23.434  ]
00:06:23.434  }
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "method": "bdev_nvme_set_hotplug",
00:06:23.434  "params": {
00:06:23.434  "period_us": 100000,
00:06:23.434  "enable": false
00:06:23.434  }
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "method": "bdev_wait_for_examine"
00:06:23.434  }
00:06:23.434  ]
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "subsystem": "scsi",
00:06:23.434  "config": null
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "subsystem": "scheduler",
00:06:23.434  "config": [
00:06:23.434  {
00:06:23.434  "method": "framework_set_scheduler",
00:06:23.434  "params": {
00:06:23.434  "name": "static"
00:06:23.434  }
00:06:23.434  }
00:06:23.434  ]
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "subsystem": "vhost_scsi",
00:06:23.434  "config": []
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "subsystem": "vhost_blk",
00:06:23.434  "config": []
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "subsystem": "ublk",
00:06:23.434  "config": []
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "subsystem": "nbd",
00:06:23.434  "config": []
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "subsystem": "nvmf",
00:06:23.434  "config": [
00:06:23.434  {
00:06:23.434  "method": "nvmf_set_config",
00:06:23.434  "params": {
00:06:23.434  "discovery_filter": "match_any",
00:06:23.434  "admin_cmd_passthru": {
00:06:23.434  "identify_ctrlr": false
00:06:23.434  },
00:06:23.434  "dhchap_digests": [
00:06:23.434  "sha256",
00:06:23.434  "sha384",
00:06:23.434  "sha512"
00:06:23.434  ],
00:06:23.434  "dhchap_dhgroups": [
00:06:23.434  "null",
00:06:23.434  "ffdhe2048",
00:06:23.434  "ffdhe3072",
00:06:23.434  "ffdhe4096",
00:06:23.434  "ffdhe6144",
00:06:23.434  "ffdhe8192"
00:06:23.434  ]
00:06:23.434  }
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "method": "nvmf_set_max_subsystems",
00:06:23.434  "params": {
00:06:23.434  "max_subsystems": 1024
00:06:23.434  }
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "method": "nvmf_set_crdt",
00:06:23.434  "params": {
00:06:23.434  "crdt1": 0,
00:06:23.434  "crdt2": 0,
00:06:23.434  "crdt3": 0
00:06:23.434  }
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "method": "nvmf_create_transport",
00:06:23.434  "params": {
00:06:23.434  "trtype": "TCP",
00:06:23.434  "max_queue_depth": 128,
00:06:23.434  "max_io_qpairs_per_ctrlr": 127,
00:06:23.434  "in_capsule_data_size": 4096,
00:06:23.434  "max_io_size": 131072,
00:06:23.434  "io_unit_size": 131072,
00:06:23.434  "max_aq_depth": 128,
00:06:23.434  "num_shared_buffers": 511,
00:06:23.434  "buf_cache_size": 4294967295,
00:06:23.434  "dif_insert_or_strip": false,
00:06:23.434  "zcopy": false,
00:06:23.434  "c2h_success": true,
00:06:23.434  "sock_priority": 0,
00:06:23.434  "abort_timeout_sec": 1,
00:06:23.434  "ack_timeout": 0,
00:06:23.434  "data_wr_pool_size": 0
00:06:23.434  }
00:06:23.434  }
00:06:23.434  ]
00:06:23.434  },
00:06:23.434  {
00:06:23.434  "subsystem": "iscsi",
00:06:23.434  "config": [
00:06:23.434  {
00:06:23.434  "method": "iscsi_set_options",
00:06:23.434  "params": {
00:06:23.434  "node_base": "iqn.2016-06.io.spdk",
00:06:23.434  "max_sessions": 128,
00:06:23.434  "max_connections_per_session": 2,
00:06:23.434  "max_queue_depth": 64,
00:06:23.434  "default_time2wait": 2,
00:06:23.434  "default_time2retain": 20,
00:06:23.434  "first_burst_length": 8192,
00:06:23.434  "immediate_data": true,
00:06:23.434  "allow_duplicated_isid": false,
00:06:23.434  "error_recovery_level": 0,
00:06:23.434  "nop_timeout": 60,
00:06:23.434  "nop_in_interval": 30,
00:06:23.434  "disable_chap": false,
00:06:23.434  "require_chap": false,
00:06:23.434  "mutual_chap": false,
00:06:23.434  "chap_group": 0,
00:06:23.434  "max_large_datain_per_connection": 64,
00:06:23.434  "max_r2t_per_connection": 4,
00:06:23.434  "pdu_pool_size": 36864,
00:06:23.434  "immediate_data_pool_size": 16384,
00:06:23.434  "data_out_pool_size": 2048
00:06:23.434  }
00:06:23.434  }
00:06:23.434  ]
00:06:23.434  }
00:06:23.434  ]
00:06:23.434  }
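
Everything above this point sets up the round trip that gives skip_rpc_with_json its name: prove the tcp transport is absent (rc -19, "No such device"), create it, then snapshot the entire live configuration. The snapshot is replayed below, at 00:06:25, by a second RPC-less target. Condensed, with paths relative to the repo root and spdk_tgt/rpc_cmd standing in for the full wrappers:

    NOT rpc_cmd nvmf_get_transports --trtype tcp      # absent at first
    rpc_cmd nvmf_create_transport -t tcp
    rpc_cmd save_config > test/rpc/config.json
    spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json &> test/rpc/log.txt
    grep -q 'TCP Transport Init' test/rpc/log.txt     # the saved config rebuilt the transport
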
00:06:23.434   14:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:06:23.434   14:17:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58604
00:06:23.434   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58604 ']'
00:06:23.434   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58604
00:06:23.434    14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:23.434   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:23.434    14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58604
00:06:23.434   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:23.434   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:23.434   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58604'
00:06:23.434  killing process with pid 58604
00:06:23.434   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58604
00:06:23.434   14:17:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58604
00:06:25.964   14:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58656
00:06:25.964   14:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:25.964   14:17:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:06:31.322   14:17:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58656
00:06:31.322   14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58656 ']'
00:06:31.322   14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58656
00:06:31.322    14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:31.322   14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:31.322    14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58656
00:06:31.322   14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:31.322   14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:31.323   14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58656'
00:06:31.322  killing process with pid 58656
00:06:31.323   14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58656
00:06:31.323   14:17:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58656
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:32.699  
00:06:32.699  real	0m10.643s
00:06:32.699  user	0m10.260s
00:06:32.699  sys	0m0.749s
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:32.699  ************************************
00:06:32.699  END TEST skip_rpc_with_json
00:06:32.699  ************************************
00:06:32.699   14:17:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:06:32.699   14:17:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:32.699   14:17:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:32.699   14:17:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:32.699  ************************************
00:06:32.699  START TEST skip_rpc_with_delay
00:06:32.699  ************************************
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:32.699    14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:32.699   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:32.699    14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:32.700   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:32.700   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:32.700   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:32.700   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:32.958  [2024-11-20 14:17:11.790104] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:06:32.958   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:06:32.958   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:32.958   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:32.958   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:32.958  
00:06:32.958  real	0m0.236s
00:06:32.958  user	0m0.136s
00:06:32.958  sys	0m0.097s
00:06:32.958   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:32.958   14:17:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:06:32.958  ************************************
00:06:32.958  END TEST skip_rpc_with_delay
00:06:32.958  ************************************
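
skip_rpc_with_delay drives NOT with a binary path rather than a shell function, which is why the trace walks valid_exec_arg's type -t / type -P ladder before executing it. Condensed from the trace above:

    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in
            function | builtin) ;;                               # shell-resolvable name
            file) arg=$(type -P "$arg") && [ -x "$arg" ] ;;      # on-disk executable
            *) return 1 ;;
        esac
    }
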
00:06:32.958    14:17:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:06:32.958   14:17:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:06:32.958   14:17:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:06:32.958   14:17:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:32.958   14:17:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:32.958   14:17:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:32.958  ************************************
00:06:32.958  START TEST exit_on_failed_rpc_init
00:06:32.958  ************************************
00:06:32.958   14:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:06:32.958   14:17:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58784
00:06:32.958   14:17:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:32.958   14:17:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58784
00:06:32.958   14:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58784 ']'
00:06:32.958   14:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:32.958   14:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:32.958  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:32.958   14:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:32.958   14:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:32.958   14:17:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:33.217  [2024-11-20 14:17:12.076275] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:33.217  [2024-11-20 14:17:12.076466] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58784 ]
00:06:33.475  [2024-11-20 14:17:12.258614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:33.475  [2024-11-20 14:17:12.368304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:34.411    14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:34.411    14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:34.411   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:34.411  [2024-11-20 14:17:13.262514] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:34.411  [2024-11-20 14:17:13.262702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58802 ]
00:06:34.670  [2024-11-20 14:17:13.437883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.670  [2024-11-20 14:17:13.541915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:34.670  [2024-11-20 14:17:13.542032] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:06:34.670  [2024-11-20 14:17:13.542061] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:06:34.670  [2024-11-20 14:17:13.542084] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58784
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58784 ']'
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58784
00:06:34.930    14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:34.930    14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58784
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58784'
00:06:34.930  killing process with pid 58784
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58784
00:06:34.930   14:17:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58784
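Editor's note: the kill sequence above follows SPDK's killprocess helper: confirm the pid is non-empty, resolve the process name with ps, special-case sudo wrappers, then kill and wait. A hedged sketch (the real helper's sudo branch is more involved; this simplification just refuses):

    killprocess() {
        local pid=$1 name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill
        name=$(ps --no-headers -o comm= "$pid")
        if [ "$name" = sudo ]; then
            echo "pid $pid is a sudo wrapper; handle its child instead" >&2
            return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap and surface the status
    }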
00:06:37.464  
00:06:37.464  real	0m4.004s
00:06:37.464  user	0m4.575s
00:06:37.464  sys	0m0.516s
00:06:37.464  ************************************
00:06:37.464  END TEST exit_on_failed_rpc_init
00:06:37.464  ************************************
00:06:37.464   14:17:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.464   14:17:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:37.464   14:17:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:37.464  
00:06:37.464  real	0m22.395s
00:06:37.464  user	0m21.851s
00:06:37.464  sys	0m1.884s
00:06:37.464   14:17:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.464  ************************************
00:06:37.464  END TEST skip_rpc
00:06:37.464  ************************************
00:06:37.464   14:17:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:37.464   14:17:16  -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:06:37.464   14:17:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:37.464   14:17:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.464   14:17:16  -- common/autotest_common.sh@10 -- # set +x
00:06:37.464  ************************************
00:06:37.464  START TEST rpc_client
00:06:37.464  ************************************
00:06:37.464   14:17:16 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:06:37.464  * Looking for test storage...
00:06:37.464  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:06:37.464    14:17:16 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:37.464     14:17:16 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:06:37.464     14:17:16 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:37.464    14:17:16 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@345 -- # : 1
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:37.464     14:17:16 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:06:37.464     14:17:16 rpc_client -- scripts/common.sh@353 -- # local d=1
00:06:37.464     14:17:16 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:37.464     14:17:16 rpc_client -- scripts/common.sh@355 -- # echo 1
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:06:37.464     14:17:16 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:06:37.464     14:17:16 rpc_client -- scripts/common.sh@353 -- # local d=2
00:06:37.464     14:17:16 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:37.464     14:17:16 rpc_client -- scripts/common.sh@355 -- # echo 2
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:37.464    14:17:16 rpc_client -- scripts/common.sh@368 -- # return 0
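Editor's note: the long trace above is SPDK's shell version comparison; 'lt 1.15 2' asks whether the installed lcov predates 2.x by splitting both version strings on '.', '-' and ':' and comparing them component by component. A condensed, hedged re-implementation (the real code routes each component through a decimal helper for non-numeric parts; plain numeric components are assumed here):

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local op=$2 v n
        local -a ver1 ver2
        local IFS=.-:                       # split on dots, dashes and colons
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a > b )) && { [ "$op" = '>' ]; return; }
            (( a < b )) && { [ "$op" = '<' ]; return; }
        done
        [ "$op" = '=' ]                     # all components equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 is older than 2"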
00:06:37.464    14:17:16 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:37.464    14:17:16 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:37.464  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:37.464  		--rc genhtml_branch_coverage=1
00:06:37.464  		--rc genhtml_function_coverage=1
00:06:37.464  		--rc genhtml_legend=1
00:06:37.464  		--rc geninfo_all_blocks=1
00:06:37.464  		--rc geninfo_unexecuted_blocks=1
00:06:37.464  		
00:06:37.464  		'
00:06:37.464    14:17:16 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:37.464  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:37.464  		--rc genhtml_branch_coverage=1
00:06:37.464  		--rc genhtml_function_coverage=1
00:06:37.464  		--rc genhtml_legend=1
00:06:37.464  		--rc geninfo_all_blocks=1
00:06:37.464  		--rc geninfo_unexecuted_blocks=1
00:06:37.464  		
00:06:37.464  		'
00:06:37.464    14:17:16 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:37.464  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:37.464  		--rc genhtml_branch_coverage=1
00:06:37.464  		--rc genhtml_function_coverage=1
00:06:37.464  		--rc genhtml_legend=1
00:06:37.464  		--rc geninfo_all_blocks=1
00:06:37.464  		--rc geninfo_unexecuted_blocks=1
00:06:37.464  		
00:06:37.464  		'
00:06:37.464    14:17:16 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:37.464  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:37.464  		--rc genhtml_branch_coverage=1
00:06:37.464  		--rc genhtml_function_coverage=1
00:06:37.464  		--rc genhtml_legend=1
00:06:37.464  		--rc geninfo_all_blocks=1
00:06:37.464  		--rc geninfo_unexecuted_blocks=1
00:06:37.464  		
00:06:37.464  		'
00:06:37.464   14:17:16 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:06:37.464  OK
00:06:37.464   14:17:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:06:37.464  
00:06:37.464  real	0m0.239s
00:06:37.464  user	0m0.134s
00:06:37.464  sys	0m0.117s
00:06:37.464  ************************************
00:06:37.464  END TEST rpc_client
00:06:37.464  ************************************
00:06:37.464   14:17:16 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.464   14:17:16 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:06:37.465   14:17:16  -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:06:37.465   14:17:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:37.465   14:17:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.465   14:17:16  -- common/autotest_common.sh@10 -- # set +x
00:06:37.465  ************************************
00:06:37.465  START TEST json_config
00:06:37.465  ************************************
00:06:37.465   14:17:16 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:06:37.465    14:17:16 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:37.465     14:17:16 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:06:37.465     14:17:16 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:37.725    14:17:16 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:37.725    14:17:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:37.725    14:17:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:37.725    14:17:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:37.725    14:17:16 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:06:37.725    14:17:16 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:06:37.725    14:17:16 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:06:37.725    14:17:16 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:06:37.725    14:17:16 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:06:37.725    14:17:16 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:06:37.725    14:17:16 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:06:37.725    14:17:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:37.725    14:17:16 json_config -- scripts/common.sh@344 -- # case "$op" in
00:06:37.725    14:17:16 json_config -- scripts/common.sh@345 -- # : 1
00:06:37.725    14:17:16 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:37.725    14:17:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:37.725     14:17:16 json_config -- scripts/common.sh@365 -- # decimal 1
00:06:37.725     14:17:16 json_config -- scripts/common.sh@353 -- # local d=1
00:06:37.725     14:17:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:37.725     14:17:16 json_config -- scripts/common.sh@355 -- # echo 1
00:06:37.725    14:17:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:06:37.725     14:17:16 json_config -- scripts/common.sh@366 -- # decimal 2
00:06:37.725     14:17:16 json_config -- scripts/common.sh@353 -- # local d=2
00:06:37.725     14:17:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:37.725     14:17:16 json_config -- scripts/common.sh@355 -- # echo 2
00:06:37.725    14:17:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:06:37.725    14:17:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:37.725    14:17:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:37.725    14:17:16 json_config -- scripts/common.sh@368 -- # return 0
00:06:37.725    14:17:16 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:37.725    14:17:16 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:37.725  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:37.725  		--rc genhtml_branch_coverage=1
00:06:37.725  		--rc genhtml_function_coverage=1
00:06:37.725  		--rc genhtml_legend=1
00:06:37.725  		--rc geninfo_all_blocks=1
00:06:37.725  		--rc geninfo_unexecuted_blocks=1
00:06:37.725  		
00:06:37.725  		'
00:06:37.725    14:17:16 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:37.725  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:37.725  		--rc genhtml_branch_coverage=1
00:06:37.725  		--rc genhtml_function_coverage=1
00:06:37.725  		--rc genhtml_legend=1
00:06:37.725  		--rc geninfo_all_blocks=1
00:06:37.725  		--rc geninfo_unexecuted_blocks=1
00:06:37.725  		
00:06:37.725  		'
00:06:37.725    14:17:16 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:37.725  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:37.725  		--rc genhtml_branch_coverage=1
00:06:37.725  		--rc genhtml_function_coverage=1
00:06:37.725  		--rc genhtml_legend=1
00:06:37.725  		--rc geninfo_all_blocks=1
00:06:37.725  		--rc geninfo_unexecuted_blocks=1
00:06:37.725  		
00:06:37.725  		'
00:06:37.725    14:17:16 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:37.725  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:37.725  		--rc genhtml_branch_coverage=1
00:06:37.725  		--rc genhtml_function_coverage=1
00:06:37.725  		--rc genhtml_legend=1
00:06:37.725  		--rc geninfo_all_blocks=1
00:06:37.725  		--rc geninfo_unexecuted_blocks=1
00:06:37.725  		
00:06:37.725  		'
00:06:37.725   14:17:16 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:37.725     14:17:16 json_config -- nvmf/common.sh@7 -- # uname -s
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:37.725     14:17:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d23fef63-b4ba-422a-867d-7e27affacb1a
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=d23fef63-b4ba-422a-867d-7e27affacb1a
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:37.725    14:17:16 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:37.725     14:17:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:06:37.725     14:17:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:37.725     14:17:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:37.725     14:17:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:37.725      14:17:16 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:37.726      14:17:16 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:37.726      14:17:16 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:37.726      14:17:16 json_config -- paths/export.sh@5 -- # export PATH
00:06:37.726      14:17:16 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:37.726    14:17:16 json_config -- nvmf/common.sh@51 -- # : 0
00:06:37.726    14:17:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:37.726    14:17:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:37.726    14:17:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:37.726    14:17:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:37.726    14:17:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:37.726    14:17:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:37.726  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:37.726    14:17:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:37.726    14:17:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:37.726    14:17:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
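Editor's note: the "integer expression expected" message above is a real shell error in the sourced nvmf/common.sh: the trace shows '[' '' -eq 1 ']', i.e. -eq evaluated against an empty left operand because the variable behind it is unset in this environment. The script tolerates it (the test fails and execution falls through), but a defensive form would default the value. The variable name below is illustrative, not taken from the script:

    # [ "$SOME_FLAG" -eq 1 ] errors when SOME_FLAG is empty; default it instead:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag-gated setup would run here"
    fi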
00:06:37.726   14:17:16 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:06:37.726   14:17:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:06:37.726   14:17:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:06:37.726   14:17:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:06:37.726   14:17:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:06:37.726  WARNING: No tests are enabled so not running JSON configuration tests
00:06:37.726   14:17:16 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:06:37.726   14:17:16 json_config -- json_config/json_config.sh@28 -- # exit 0
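Editor's note: json_config exits immediately here because none of the relevant test flags are enabled; the gate is a plain arithmetic sum, as in this sketch (flag names copied from the trace, the defaulting lines are an assumption):

    : "${SPDK_TEST_BLOCKDEV:=0}" "${SPDK_TEST_ISCSI:=0}" "${SPDK_TEST_NVMF:=0}"
    : "${SPDK_TEST_VHOST:=0}" "${SPDK_TEST_VHOST_INIT:=0}" "${SPDK_TEST_RBD:=0}"
    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF +
          SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi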
00:06:37.726  
00:06:37.726  real	0m0.193s
00:06:37.726  user	0m0.135s
00:06:37.726  sys	0m0.065s
00:06:37.726   14:17:16 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:37.726   14:17:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:06:37.726  ************************************
00:06:37.726  END TEST json_config
00:06:37.726  ************************************
00:06:37.726   14:17:16  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:06:37.726   14:17:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:37.726   14:17:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:37.726   14:17:16  -- common/autotest_common.sh@10 -- # set +x
00:06:37.726  ************************************
00:06:37.726  START TEST json_config_extra_key
00:06:37.726  ************************************
00:06:37.726   14:17:16 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:06:37.726    14:17:16 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:37.726     14:17:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:06:37.726     14:17:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:37.726    14:17:16 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:37.726     14:17:16 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:06:37.726     14:17:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:06:37.726     14:17:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:37.726     14:17:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:06:37.726    14:17:16 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:06:37.726     14:17:16 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:06:38.032     14:17:16 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:06:38.032     14:17:16 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:38.032     14:17:16 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:06:38.032    14:17:16 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:06:38.032    14:17:16 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:38.032    14:17:16 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:38.032    14:17:16 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:06:38.032    14:17:16 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:38.032    14:17:16 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:38.032  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:38.032  		--rc genhtml_branch_coverage=1
00:06:38.032  		--rc genhtml_function_coverage=1
00:06:38.032  		--rc genhtml_legend=1
00:06:38.032  		--rc geninfo_all_blocks=1
00:06:38.032  		--rc geninfo_unexecuted_blocks=1
00:06:38.032  		
00:06:38.032  		'
00:06:38.032    14:17:16 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:38.032  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:38.032  		--rc genhtml_branch_coverage=1
00:06:38.032  		--rc genhtml_function_coverage=1
00:06:38.032  		--rc genhtml_legend=1
00:06:38.032  		--rc geninfo_all_blocks=1
00:06:38.032  		--rc geninfo_unexecuted_blocks=1
00:06:38.032  		
00:06:38.032  		'
00:06:38.032    14:17:16 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:38.032  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:38.032  		--rc genhtml_branch_coverage=1
00:06:38.032  		--rc genhtml_function_coverage=1
00:06:38.032  		--rc genhtml_legend=1
00:06:38.032  		--rc geninfo_all_blocks=1
00:06:38.032  		--rc geninfo_unexecuted_blocks=1
00:06:38.032  		
00:06:38.032  		'
00:06:38.032    14:17:16 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:38.032  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:38.032  		--rc genhtml_branch_coverage=1
00:06:38.032  		--rc genhtml_function_coverage=1
00:06:38.032  		--rc genhtml_legend=1
00:06:38.032  		--rc geninfo_all_blocks=1
00:06:38.032  		--rc geninfo_unexecuted_blocks=1
00:06:38.032  		
00:06:38.032  		'
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:06:38.032     14:17:16 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:38.032     14:17:16 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:d23fef63-b4ba-422a-867d-7e27affacb1a
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=d23fef63-b4ba-422a-867d-7e27affacb1a
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:38.032     14:17:16 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:06:38.032     14:17:16 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:38.032     14:17:16 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:38.032     14:17:16 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:38.032      14:17:16 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:38.032      14:17:16 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:38.032      14:17:16 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:38.032      14:17:16 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:06:38.032      14:17:16 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:38.032  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:38.032    14:17:16 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:06:38.032  INFO: launching applications...
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
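Editor's note: json_config_extra_key keys all per-app state by the app name ('target') using bash associative arrays, as declared above. A standalone reproduction of the same pattern (socket and params copied from the trace; the config path is shortened here):

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']="$PWD/extra_key.json")
    echo "target RPC socket: ${app_socket[target]}, params: ${app_params[target]}"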
00:06:38.032   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:38.033   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:06:38.033   14:17:16 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59012
00:06:38.033  Waiting for target to run...
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59012 /var/tmp/spdk_tgt.sock
00:06:38.033   14:17:16 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:06:38.033   14:17:16 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59012 ']'
00:06:38.033   14:17:16 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:38.033  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:38.033   14:17:16 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:38.033   14:17:16 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:38.033   14:17:16 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:38.033   14:17:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:38.033  [2024-11-20 14:17:16.851053] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:38.033  [2024-11-20 14:17:16.851230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59012 ]
00:06:38.293  [2024-11-20 14:17:17.206973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.552  [2024-11-20 14:17:17.324603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.119   14:17:18 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:39.119  
00:06:39.119   14:17:18 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
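Editor's note: the return 0 above is waitforlisten succeeding once the freshly launched target answers on /var/tmp/spdk_tgt.sock. A hedged sketch of that polling pattern, reusing rpc.py and the rpc_get_methods call that both appear elsewhere in this log (the retry bound mirrors max_retries=100 in the trace; the loop shape is assumed):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }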
00:06:39.119   14:17:18 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:06:39.119  INFO: shutting down applications...
00:06:39.119   14:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:06:39.119   14:17:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:06:39.119   14:17:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:06:39.119   14:17:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:06:39.119   14:17:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59012 ]]
00:06:39.119   14:17:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59012
00:06:39.119   14:17:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:06:39.119   14:17:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:39.119   14:17:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59012
00:06:39.119   14:17:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:39.686   14:17:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:39.686   14:17:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:39.686   14:17:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59012
00:06:39.686   14:17:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:40.253   14:17:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:40.253   14:17:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:40.253   14:17:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59012
00:06:40.253   14:17:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:40.819   14:17:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:40.819   14:17:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:40.819   14:17:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59012
00:06:40.819   14:17:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:41.078   14:17:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:41.078   14:17:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:41.078   14:17:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59012
00:06:41.078   14:17:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:06:41.645   14:17:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:06:41.645   14:17:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:06:41.645   14:17:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59012
00:06:41.645   14:17:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:06:41.645   14:17:20 json_config_extra_key -- json_config/common.sh@43 -- # break
00:06:41.645   14:17:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:06:41.645  SPDK target shutdown done
00:06:41.645   14:17:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:06:41.645  Success
00:06:41.645   14:17:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
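Editor's note: the shutdown traced above is a bounded poll: one SIGINT, then up to 30 half-second kill -0 checks until pid 59012 disappears (it took five iterations here). Condensed into a standalone sketch (function name from the trace; the timeout handling is an assumption):

    json_config_test_shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "app $pid did not stop within 15s" >&2
        return 1
    }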
00:06:41.645  
00:06:41.645  real	0m4.004s
00:06:41.645  user	0m3.874s
00:06:41.645  sys	0m0.480s
00:06:41.645   14:17:20 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:41.645   14:17:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:06:41.645  ************************************
00:06:41.645  END TEST json_config_extra_key
00:06:41.645  ************************************
00:06:41.645   14:17:20  -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:41.646   14:17:20  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:41.646   14:17:20  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:41.646   14:17:20  -- common/autotest_common.sh@10 -- # set +x
00:06:41.646  ************************************
00:06:41.646  START TEST alias_rpc
00:06:41.646  ************************************
00:06:41.646   14:17:20 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:41.904  * Looking for test storage...
00:06:41.904  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:06:41.904    14:17:20 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:41.904     14:17:20 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:41.904     14:17:20 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:41.904    14:17:20 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@345 -- # : 1
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:41.904     14:17:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:41.904     14:17:20 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:06:41.904     14:17:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:41.904     14:17:20 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:41.904     14:17:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:41.904     14:17:20 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:06:41.904     14:17:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:41.904     14:17:20 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:41.904    14:17:20 alias_rpc -- scripts/common.sh@368 -- # return 0
00:06:41.904    14:17:20 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:41.904    14:17:20 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:41.904  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.904  		--rc genhtml_branch_coverage=1
00:06:41.904  		--rc genhtml_function_coverage=1
00:06:41.904  		--rc genhtml_legend=1
00:06:41.904  		--rc geninfo_all_blocks=1
00:06:41.904  		--rc geninfo_unexecuted_blocks=1
00:06:41.904  		
00:06:41.904  		'
00:06:41.904    14:17:20 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:41.904  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.904  		--rc genhtml_branch_coverage=1
00:06:41.904  		--rc genhtml_function_coverage=1
00:06:41.904  		--rc genhtml_legend=1
00:06:41.904  		--rc geninfo_all_blocks=1
00:06:41.904  		--rc geninfo_unexecuted_blocks=1
00:06:41.904  		
00:06:41.904  		'
00:06:41.904    14:17:20 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:41.904  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.904  		--rc genhtml_branch_coverage=1
00:06:41.904  		--rc genhtml_function_coverage=1
00:06:41.904  		--rc genhtml_legend=1
00:06:41.904  		--rc geninfo_all_blocks=1
00:06:41.904  		--rc geninfo_unexecuted_blocks=1
00:06:41.904  		
00:06:41.904  		'
00:06:41.904    14:17:20 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:41.904  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.904  		--rc genhtml_branch_coverage=1
00:06:41.905  		--rc genhtml_function_coverage=1
00:06:41.905  		--rc genhtml_legend=1
00:06:41.905  		--rc geninfo_all_blocks=1
00:06:41.905  		--rc geninfo_unexecuted_blocks=1
00:06:41.905  		
00:06:41.905  		'
00:06:41.905   14:17:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:41.905   14:17:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59117
00:06:41.905   14:17:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:41.905   14:17:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59117
00:06:41.905   14:17:20 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59117 ']'
00:06:41.905   14:17:20 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:41.905   14:17:20 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:41.905  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:41.905   14:17:20 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:41.905   14:17:20 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:41.905   14:17:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:42.163  [2024-11-20 14:17:20.888612] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:42.163  [2024-11-20 14:17:20.889550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59117 ]
00:06:42.163  [2024-11-20 14:17:21.068623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.421  [2024-11-20 14:17:21.169880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.988   14:17:21 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:42.988   14:17:21 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:42.988   14:17:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:06:43.580   14:17:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59117
00:06:43.580   14:17:22 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59117 ']'
00:06:43.580   14:17:22 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59117
00:06:43.580    14:17:22 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:06:43.580   14:17:22 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:43.580    14:17:22 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59117
00:06:43.580   14:17:22 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:43.580   14:17:22 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:43.580  killing process with pid 59117
00:06:43.580   14:17:22 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59117'
00:06:43.580   14:17:22 alias_rpc -- common/autotest_common.sh@973 -- # kill 59117
00:06:43.580   14:17:22 alias_rpc -- common/autotest_common.sh@978 -- # wait 59117
00:06:45.481  
00:06:45.481  real	0m3.790s
00:06:45.481  user	0m4.035s
00:06:45.481  sys	0m0.491s
00:06:45.481   14:17:24 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:45.481   14:17:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:45.481  ************************************
00:06:45.481  END TEST alias_rpc
00:06:45.481  ************************************
00:06:45.481   14:17:24  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:06:45.481   14:17:24  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:06:45.481   14:17:24  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:45.481   14:17:24  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:45.481   14:17:24  -- common/autotest_common.sh@10 -- # set +x
00:06:45.481  ************************************
00:06:45.481  START TEST spdkcli_tcp
00:06:45.481  ************************************
00:06:45.481   14:17:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:06:45.740  * Looking for test storage...
00:06:45.740  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:06:45.740    14:17:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:45.740     14:17:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:06:45.740     14:17:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:45.740    14:17:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:45.740     14:17:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:06:45.740     14:17:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:06:45.740     14:17:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:45.740     14:17:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:06:45.740     14:17:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:06:45.740     14:17:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:06:45.740     14:17:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:45.740     14:17:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:45.740    14:17:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:06:45.740    14:17:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:45.740    14:17:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:45.740  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.740  		--rc genhtml_branch_coverage=1
00:06:45.740  		--rc genhtml_function_coverage=1
00:06:45.740  		--rc genhtml_legend=1
00:06:45.740  		--rc geninfo_all_blocks=1
00:06:45.740  		--rc geninfo_unexecuted_blocks=1
00:06:45.740  		
00:06:45.740  		'
00:06:45.740    14:17:24 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:45.740  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.740  		--rc genhtml_branch_coverage=1
00:06:45.740  		--rc genhtml_function_coverage=1
00:06:45.740  		--rc genhtml_legend=1
00:06:45.740  		--rc geninfo_all_blocks=1
00:06:45.740  		--rc geninfo_unexecuted_blocks=1
00:06:45.740  		
00:06:45.740  		'
00:06:45.740    14:17:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:45.740  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.740  		--rc genhtml_branch_coverage=1
00:06:45.740  		--rc genhtml_function_coverage=1
00:06:45.740  		--rc genhtml_legend=1
00:06:45.740  		--rc geninfo_all_blocks=1
00:06:45.740  		--rc geninfo_unexecuted_blocks=1
00:06:45.740  		
00:06:45.740  		'
00:06:45.740    14:17:24 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:45.740  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:45.740  		--rc genhtml_branch_coverage=1
00:06:45.740  		--rc genhtml_function_coverage=1
00:06:45.740  		--rc genhtml_legend=1
00:06:45.740  		--rc geninfo_all_blocks=1
00:06:45.740  		--rc geninfo_unexecuted_blocks=1
00:06:45.740  		
00:06:45.740  		'
00:06:45.740   14:17:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:06:45.740    14:17:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:06:45.740    14:17:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:06:45.740   14:17:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:45.740   14:17:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:45.740   14:17:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:45.740   14:17:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:45.740   14:17:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:45.740   14:17:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:45.740   14:17:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59215
00:06:45.740   14:17:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59215
00:06:45.740   14:17:24 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59215 ']'
00:06:45.740   14:17:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.740   14:17:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:45.741   14:17:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:45.741  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.741   14:17:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:45.741   14:17:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:45.741   14:17:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:45.999  [2024-11-20 14:17:24.783791] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:45.999  [2024-11-20 14:17:24.783967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59215 ]
00:06:45.999  [2024-11-20 14:17:24.976162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:46.258  [2024-11-20 14:17:25.103167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.258  [2024-11-20 14:17:25.103178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:47.195   14:17:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:47.195   14:17:25 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:06:47.195   14:17:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59243
00:06:47.195   14:17:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:47.195   14:17:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
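Editor's note: the two commands above bridge TCP port 9998 to the target's UNIX-domain RPC socket with socat, and the rpc_get_methods call whose JSON output follows below is then issued over TCP (-r 100 retries, -t 2 second timeout, flags as in the log). As a standalone sequence:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"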
00:06:47.454  [
00:06:47.454    "bdev_malloc_delete",
00:06:47.454    "bdev_malloc_create",
00:06:47.454    "bdev_null_resize",
00:06:47.454    "bdev_null_delete",
00:06:47.454    "bdev_null_create",
00:06:47.454    "bdev_nvme_cuse_unregister",
00:06:47.454    "bdev_nvme_cuse_register",
00:06:47.454    "bdev_opal_new_user",
00:06:47.454    "bdev_opal_set_lock_state",
00:06:47.454    "bdev_opal_delete",
00:06:47.454    "bdev_opal_get_info",
00:06:47.454    "bdev_opal_create",
00:06:47.454    "bdev_nvme_opal_revert",
00:06:47.454    "bdev_nvme_opal_init",
00:06:47.454    "bdev_nvme_send_cmd",
00:06:47.454    "bdev_nvme_set_keys",
00:06:47.454    "bdev_nvme_get_path_iostat",
00:06:47.454    "bdev_nvme_get_mdns_discovery_info",
00:06:47.454    "bdev_nvme_stop_mdns_discovery",
00:06:47.454    "bdev_nvme_start_mdns_discovery",
00:06:47.454    "bdev_nvme_set_multipath_policy",
00:06:47.454    "bdev_nvme_set_preferred_path",
00:06:47.454    "bdev_nvme_get_io_paths",
00:06:47.454    "bdev_nvme_remove_error_injection",
00:06:47.454    "bdev_nvme_add_error_injection",
00:06:47.454    "bdev_nvme_get_discovery_info",
00:06:47.454    "bdev_nvme_stop_discovery",
00:06:47.454    "bdev_nvme_start_discovery",
00:06:47.454    "bdev_nvme_get_controller_health_info",
00:06:47.454    "bdev_nvme_disable_controller",
00:06:47.454    "bdev_nvme_enable_controller",
00:06:47.454    "bdev_nvme_reset_controller",
00:06:47.454    "bdev_nvme_get_transport_statistics",
00:06:47.454    "bdev_nvme_apply_firmware",
00:06:47.454    "bdev_nvme_detach_controller",
00:06:47.454    "bdev_nvme_get_controllers",
00:06:47.454    "bdev_nvme_attach_controller",
00:06:47.454    "bdev_nvme_set_hotplug",
00:06:47.454    "bdev_nvme_set_options",
00:06:47.454    "bdev_passthru_delete",
00:06:47.454    "bdev_passthru_create",
00:06:47.454    "bdev_lvol_set_parent_bdev",
00:06:47.454    "bdev_lvol_set_parent",
00:06:47.454    "bdev_lvol_check_shallow_copy",
00:06:47.454    "bdev_lvol_start_shallow_copy",
00:06:47.454    "bdev_lvol_grow_lvstore",
00:06:47.454    "bdev_lvol_get_lvols",
00:06:47.454    "bdev_lvol_get_lvstores",
00:06:47.454    "bdev_lvol_delete",
00:06:47.454    "bdev_lvol_set_read_only",
00:06:47.454    "bdev_lvol_resize",
00:06:47.454    "bdev_lvol_decouple_parent",
00:06:47.454    "bdev_lvol_inflate",
00:06:47.454    "bdev_lvol_rename",
00:06:47.454    "bdev_lvol_clone_bdev",
00:06:47.454    "bdev_lvol_clone",
00:06:47.454    "bdev_lvol_snapshot",
00:06:47.454    "bdev_lvol_create",
00:06:47.454    "bdev_lvol_delete_lvstore",
00:06:47.454    "bdev_lvol_rename_lvstore",
00:06:47.454    "bdev_lvol_create_lvstore",
00:06:47.454    "bdev_raid_set_options",
00:06:47.454    "bdev_raid_remove_base_bdev",
00:06:47.454    "bdev_raid_add_base_bdev",
00:06:47.454    "bdev_raid_delete",
00:06:47.454    "bdev_raid_create",
00:06:47.454    "bdev_raid_get_bdevs",
00:06:47.454    "bdev_error_inject_error",
00:06:47.454    "bdev_error_delete",
00:06:47.454    "bdev_error_create",
00:06:47.454    "bdev_split_delete",
00:06:47.454    "bdev_split_create",
00:06:47.454    "bdev_delay_delete",
00:06:47.454    "bdev_delay_create",
00:06:47.454    "bdev_delay_update_latency",
00:06:47.454    "bdev_zone_block_delete",
00:06:47.454    "bdev_zone_block_create",
00:06:47.454    "blobfs_create",
00:06:47.454    "blobfs_detect",
00:06:47.454    "blobfs_set_cache_size",
00:06:47.454    "bdev_xnvme_delete",
00:06:47.454    "bdev_xnvme_create",
00:06:47.454    "bdev_aio_delete",
00:06:47.454    "bdev_aio_rescan",
00:06:47.454    "bdev_aio_create",
00:06:47.454    "bdev_ftl_set_property",
00:06:47.454    "bdev_ftl_get_properties",
00:06:47.454    "bdev_ftl_get_stats",
00:06:47.454    "bdev_ftl_unmap",
00:06:47.454    "bdev_ftl_unload",
00:06:47.454    "bdev_ftl_delete",
00:06:47.454    "bdev_ftl_load",
00:06:47.454    "bdev_ftl_create",
00:06:47.454    "bdev_virtio_attach_controller",
00:06:47.454    "bdev_virtio_scsi_get_devices",
00:06:47.454    "bdev_virtio_detach_controller",
00:06:47.454    "bdev_virtio_blk_set_hotplug",
00:06:47.454    "bdev_iscsi_delete",
00:06:47.454    "bdev_iscsi_create",
00:06:47.454    "bdev_iscsi_set_options",
00:06:47.454    "accel_error_inject_error",
00:06:47.454    "ioat_scan_accel_module",
00:06:47.454    "dsa_scan_accel_module",
00:06:47.454    "iaa_scan_accel_module",
00:06:47.454    "keyring_file_remove_key",
00:06:47.454    "keyring_file_add_key",
00:06:47.454    "keyring_linux_set_options",
00:06:47.454    "fsdev_aio_delete",
00:06:47.454    "fsdev_aio_create",
00:06:47.454    "iscsi_get_histogram",
00:06:47.454    "iscsi_enable_histogram",
00:06:47.454    "iscsi_set_options",
00:06:47.454    "iscsi_get_auth_groups",
00:06:47.454    "iscsi_auth_group_remove_secret",
00:06:47.454    "iscsi_auth_group_add_secret",
00:06:47.454    "iscsi_delete_auth_group",
00:06:47.454    "iscsi_create_auth_group",
00:06:47.454    "iscsi_set_discovery_auth",
00:06:47.454    "iscsi_get_options",
00:06:47.455    "iscsi_target_node_request_logout",
00:06:47.455    "iscsi_target_node_set_redirect",
00:06:47.455    "iscsi_target_node_set_auth",
00:06:47.455    "iscsi_target_node_add_lun",
00:06:47.455    "iscsi_get_stats",
00:06:47.455    "iscsi_get_connections",
00:06:47.455    "iscsi_portal_group_set_auth",
00:06:47.455    "iscsi_start_portal_group",
00:06:47.455    "iscsi_delete_portal_group",
00:06:47.455    "iscsi_create_portal_group",
00:06:47.455    "iscsi_get_portal_groups",
00:06:47.455    "iscsi_delete_target_node",
00:06:47.455    "iscsi_target_node_remove_pg_ig_maps",
00:06:47.455    "iscsi_target_node_add_pg_ig_maps",
00:06:47.455    "iscsi_create_target_node",
00:06:47.455    "iscsi_get_target_nodes",
00:06:47.455    "iscsi_delete_initiator_group",
00:06:47.455    "iscsi_initiator_group_remove_initiators",
00:06:47.455    "iscsi_initiator_group_add_initiators",
00:06:47.455    "iscsi_create_initiator_group",
00:06:47.455    "iscsi_get_initiator_groups",
00:06:47.455    "nvmf_set_crdt",
00:06:47.455    "nvmf_set_config",
00:06:47.455    "nvmf_set_max_subsystems",
00:06:47.455    "nvmf_stop_mdns_prr",
00:06:47.455    "nvmf_publish_mdns_prr",
00:06:47.455    "nvmf_subsystem_get_listeners",
00:06:47.455    "nvmf_subsystem_get_qpairs",
00:06:47.455    "nvmf_subsystem_get_controllers",
00:06:47.455    "nvmf_get_stats",
00:06:47.455    "nvmf_get_transports",
00:06:47.455    "nvmf_create_transport",
00:06:47.455    "nvmf_get_targets",
00:06:47.455    "nvmf_delete_target",
00:06:47.455    "nvmf_create_target",
00:06:47.455    "nvmf_subsystem_allow_any_host",
00:06:47.455    "nvmf_subsystem_set_keys",
00:06:47.455    "nvmf_subsystem_remove_host",
00:06:47.455    "nvmf_subsystem_add_host",
00:06:47.455    "nvmf_ns_remove_host",
00:06:47.455    "nvmf_ns_add_host",
00:06:47.455    "nvmf_subsystem_remove_ns",
00:06:47.455    "nvmf_subsystem_set_ns_ana_group",
00:06:47.455    "nvmf_subsystem_add_ns",
00:06:47.455    "nvmf_subsystem_listener_set_ana_state",
00:06:47.455    "nvmf_discovery_get_referrals",
00:06:47.455    "nvmf_discovery_remove_referral",
00:06:47.455    "nvmf_discovery_add_referral",
00:06:47.455    "nvmf_subsystem_remove_listener",
00:06:47.455    "nvmf_subsystem_add_listener",
00:06:47.455    "nvmf_delete_subsystem",
00:06:47.455    "nvmf_create_subsystem",
00:06:47.455    "nvmf_get_subsystems",
00:06:47.455    "env_dpdk_get_mem_stats",
00:06:47.455    "nbd_get_disks",
00:06:47.455    "nbd_stop_disk",
00:06:47.455    "nbd_start_disk",
00:06:47.455    "ublk_recover_disk",
00:06:47.455    "ublk_get_disks",
00:06:47.455    "ublk_stop_disk",
00:06:47.455    "ublk_start_disk",
00:06:47.455    "ublk_destroy_target",
00:06:47.455    "ublk_create_target",
00:06:47.455    "virtio_blk_create_transport",
00:06:47.455    "virtio_blk_get_transports",
00:06:47.455    "vhost_controller_set_coalescing",
00:06:47.455    "vhost_get_controllers",
00:06:47.455    "vhost_delete_controller",
00:06:47.455    "vhost_create_blk_controller",
00:06:47.455    "vhost_scsi_controller_remove_target",
00:06:47.455    "vhost_scsi_controller_add_target",
00:06:47.455    "vhost_start_scsi_controller",
00:06:47.455    "vhost_create_scsi_controller",
00:06:47.455    "thread_set_cpumask",
00:06:47.455    "scheduler_set_options",
00:06:47.455    "framework_get_governor",
00:06:47.455    "framework_get_scheduler",
00:06:47.455    "framework_set_scheduler",
00:06:47.455    "framework_get_reactors",
00:06:47.455    "thread_get_io_channels",
00:06:47.455    "thread_get_pollers",
00:06:47.455    "thread_get_stats",
00:06:47.455    "framework_monitor_context_switch",
00:06:47.455    "spdk_kill_instance",
00:06:47.455    "log_enable_timestamps",
00:06:47.455    "log_get_flags",
00:06:47.455    "log_clear_flag",
00:06:47.455    "log_set_flag",
00:06:47.455    "log_get_level",
00:06:47.455    "log_set_level",
00:06:47.455    "log_get_print_level",
00:06:47.455    "log_set_print_level",
00:06:47.455    "framework_enable_cpumask_locks",
00:06:47.455    "framework_disable_cpumask_locks",
00:06:47.455    "framework_wait_init",
00:06:47.455    "framework_start_init",
00:06:47.455    "scsi_get_devices",
00:06:47.455    "bdev_get_histogram",
00:06:47.455    "bdev_enable_histogram",
00:06:47.455    "bdev_set_qos_limit",
00:06:47.455    "bdev_set_qd_sampling_period",
00:06:47.455    "bdev_get_bdevs",
00:06:47.455    "bdev_reset_iostat",
00:06:47.455    "bdev_get_iostat",
00:06:47.455    "bdev_examine",
00:06:47.455    "bdev_wait_for_examine",
00:06:47.455    "bdev_set_options",
00:06:47.455    "accel_get_stats",
00:06:47.455    "accel_set_options",
00:06:47.455    "accel_set_driver",
00:06:47.455    "accel_crypto_key_destroy",
00:06:47.455    "accel_crypto_keys_get",
00:06:47.455    "accel_crypto_key_create",
00:06:47.455    "accel_assign_opc",
00:06:47.455    "accel_get_module_info",
00:06:47.455    "accel_get_opc_assignments",
00:06:47.455    "vmd_rescan",
00:06:47.455    "vmd_remove_device",
00:06:47.455    "vmd_enable",
00:06:47.455    "sock_get_default_impl",
00:06:47.455    "sock_set_default_impl",
00:06:47.455    "sock_impl_set_options",
00:06:47.455    "sock_impl_get_options",
00:06:47.455    "iobuf_get_stats",
00:06:47.455    "iobuf_set_options",
00:06:47.455    "keyring_get_keys",
00:06:47.455    "framework_get_pci_devices",
00:06:47.455    "framework_get_config",
00:06:47.455    "framework_get_subsystems",
00:06:47.455    "fsdev_set_opts",
00:06:47.455    "fsdev_get_opts",
00:06:47.455    "trace_get_info",
00:06:47.455    "trace_get_tpoint_group_mask",
00:06:47.455    "trace_disable_tpoint_group",
00:06:47.455    "trace_enable_tpoint_group",
00:06:47.455    "trace_clear_tpoint_mask",
00:06:47.455    "trace_set_tpoint_mask",
00:06:47.455    "notify_get_notifications",
00:06:47.455    "notify_get_types",
00:06:47.455    "spdk_get_version",
00:06:47.455    "rpc_get_methods"
00:06:47.455  ]
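The method list prints over TCP even though spdk_tgt only opened /var/tmp/spdk.sock: tcp.sh inserts a socat relay that accepts on port 9998 and forwards bytes to the UNIX socket, and rpc.py is pointed at 127.0.0.1:9998 with a retry count (-r 100) and a per-request timeout (-t 2). The bridge can be reproduced by hand against a running target (socat without the fork option serves a single connection, which is enough for one rpc.py call):

    # Relay TCP port 9998 to the target's UNIX-domain RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Same flags as the trace: retry up to 100 times, 2 s timeout per request.
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" 2>/dev/null || true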
00:06:47.455   14:17:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:47.455   14:17:26 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:47.455   14:17:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:47.455   14:17:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:47.455   14:17:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59215
00:06:47.455   14:17:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59215 ']'
00:06:47.455   14:17:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59215
00:06:47.455    14:17:26 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:06:47.455   14:17:26 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:47.455    14:17:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59215
00:06:47.455   14:17:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:47.455   14:17:26 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:47.455   14:17:26 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59215'
00:06:47.455  killing process with pid 59215
00:06:47.455   14:17:26 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59215
00:06:47.455   14:17:26 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59215
00:06:49.987  
00:06:49.987  real	0m4.050s
00:06:49.987  user	0m7.479s
00:06:49.987  sys	0m0.542s
00:06:49.987   14:17:28 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:49.987   14:17:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:49.987  ************************************
00:06:49.987  END TEST spdkcli_tcp
00:06:49.987  ************************************
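Teardown follows the harness's killprocess pattern: confirm the PID is still alive with kill -0, check the process name so a bare sudo wrapper is never signalled directly, then SIGTERM and reap. A condensed sketch under those assumptions (the real helper, with its sudo branch, is in common/autotest_common.sh):

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already exited
        # Never signal a bare 'sudo' wrapper; the trace checks comm= for this.
        [[ "$(ps --no-headers -o comm= "$pid")" == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid"                                     # default SIGTERM
        wait "$pid" 2>/dev/null || true                 # reap if it is our child
    }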
00:06:49.987   14:17:28  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:49.987   14:17:28  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:49.987   14:17:28  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:49.987   14:17:28  -- common/autotest_common.sh@10 -- # set +x
00:06:49.987  ************************************
00:06:49.987  START TEST dpdk_mem_utility
00:06:49.987  ************************************
00:06:49.987   14:17:28 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:49.987  * Looking for test storage...
00:06:49.987  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:06:49.987    14:17:28 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:49.987     14:17:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version
00:06:49.987     14:17:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:49.987    14:17:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:49.987     14:17:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:06:49.987     14:17:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:06:49.987     14:17:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:49.987     14:17:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:06:49.987     14:17:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:06:49.987     14:17:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:06:49.987     14:17:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:49.987     14:17:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:49.987    14:17:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:06:49.987    14:17:28 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:49.987    14:17:28 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:49.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.987  		--rc genhtml_branch_coverage=1
00:06:49.987  		--rc genhtml_function_coverage=1
00:06:49.987  		--rc genhtml_legend=1
00:06:49.987  		--rc geninfo_all_blocks=1
00:06:49.987  		--rc geninfo_unexecuted_blocks=1
00:06:49.987  		
00:06:49.987  		'
00:06:49.987    14:17:28 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:49.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.987  		--rc genhtml_branch_coverage=1
00:06:49.987  		--rc genhtml_function_coverage=1
00:06:49.987  		--rc genhtml_legend=1
00:06:49.987  		--rc geninfo_all_blocks=1
00:06:49.987  		--rc geninfo_unexecuted_blocks=1
00:06:49.987  		
00:06:49.987  		'
00:06:49.987    14:17:28 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:49.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.987  		--rc genhtml_branch_coverage=1
00:06:49.987  		--rc genhtml_function_coverage=1
00:06:49.987  		--rc genhtml_legend=1
00:06:49.987  		--rc geninfo_all_blocks=1
00:06:49.987  		--rc geninfo_unexecuted_blocks=1
00:06:49.987  		
00:06:49.987  		'
00:06:49.987    14:17:28 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:49.987  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:49.987  		--rc genhtml_branch_coverage=1
00:06:49.987  		--rc genhtml_function_coverage=1
00:06:49.987  		--rc genhtml_legend=1
00:06:49.987  		--rc geninfo_all_blocks=1
00:06:49.987  		--rc geninfo_unexecuted_blocks=1
00:06:49.987  		
00:06:49.987  		'
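The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.x: cmp_versions splits both version strings on '.', '-' and ':' into arrays, then walks the components left to right, comparing them numerically until one side wins. Because 1.15 sorts below 2, the legacy --rc coverage flags get exported. A condensed, numeric-only sketch of that comparison (the split characters and component walk mirror the trace; edge cases such as non-numeric suffixes are omitted):

    # Succeed if dotted version $1 sorts strictly below $2.
    version_lt() {
        local IFS=.-: i v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not 'less than'
    }
    version_lt 1.15 2 && echo "old lcov: enable legacy --rc options"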
00:06:49.987   14:17:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:06:49.987   14:17:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59337
00:06:49.987   14:17:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:49.987   14:17:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59337
00:06:49.987   14:17:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59337 ']'
00:06:49.987   14:17:28 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:49.987   14:17:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:49.987  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:49.987   14:17:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:49.987   14:17:28 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:49.987   14:17:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:49.987  [2024-11-20 14:17:28.841166] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:49.987  [2024-11-20 14:17:28.841348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59337 ]
00:06:50.244  [2024-11-20 14:17:29.019667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.244  [2024-11-20 14:17:29.125559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.179   14:17:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:51.179   14:17:29 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:06:51.179   14:17:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:51.179   14:17:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:51.179   14:17:29 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:51.179   14:17:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:51.179  {
00:06:51.179  "filename": "/tmp/spdk_mem_dump.txt"
00:06:51.179  }
00:06:51.179   14:17:29 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:51.179   14:17:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:06:51.179  DPDK memory size 824.000000 MiB in 1 heap(s)
00:06:51.179  1 heaps totaling size 824.000000 MiB
00:06:51.179    size:  824.000000 MiB heap id: 0
00:06:51.179  end heaps----------
00:06:51.179  9 mempools totaling size 603.782043 MiB
00:06:51.179    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:06:51.179    size:  158.602051 MiB name: PDU_data_out_Pool
00:06:51.179    size:  100.555481 MiB name: bdev_io_59337
00:06:51.179    size:   50.003479 MiB name: msgpool_59337
00:06:51.179    size:   36.509338 MiB name: fsdev_io_59337
00:06:51.179    size:   21.763794 MiB name: PDU_Pool
00:06:51.179    size:   19.513306 MiB name: SCSI_TASK_Pool
00:06:51.179    size:    4.133484 MiB name: evtpool_59337
00:06:51.179    size:    0.026123 MiB name: Session_Pool
00:06:51.179  end mempools-------
00:06:51.179  6 memzones totaling size 4.142822 MiB
00:06:51.179    size:    1.000366 MiB name: RG_ring_0_59337
00:06:51.179    size:    1.000366 MiB name: RG_ring_1_59337
00:06:51.179    size:    1.000366 MiB name: RG_ring_4_59337
00:06:51.179    size:    1.000366 MiB name: RG_ring_5_59337
00:06:51.179    size:    0.125366 MiB name: RG_ring_2_59337
00:06:51.179    size:    0.015991 MiB name: RG_ring_3_59337
00:06:51.179  end memzones-------
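Two RPC-driven steps produce everything above: the env_dpdk_get_mem_stats RPC asks the target to dump its DPDK heap state to /tmp/spdk_mem_dump.txt (the JSON reply names the file), and scripts/dpdk_mem_info.py then parses that dump. Without arguments it prints the heap/mempool/memzone summary just shown; with -m 0 (the next step) it prints the full free/busy element map for heap id 0. Against a live target the sequence is:

    # Assumes spdk_tgt is listening on the default /var/tmp/spdk.sock.
    scripts/rpc.py env_dpdk_get_mem_stats   # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0           # per-element map for heap id 0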
00:06:51.179   14:17:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:06:51.179  heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18
00:06:51.179    list of free elements. size: 16.781860 MiB
00:06:51.179      element at address: 0x200006400000 with size:    1.995972 MiB
00:06:51.179      element at address: 0x20000a600000 with size:    1.995972 MiB
00:06:51.179      element at address: 0x200003e00000 with size:    1.991028 MiB
00:06:51.179      element at address: 0x200019500040 with size:    0.999939 MiB
00:06:51.179      element at address: 0x200019900040 with size:    0.999939 MiB
00:06:51.179      element at address: 0x200019a00000 with size:    0.999084 MiB
00:06:51.179      element at address: 0x200032600000 with size:    0.994324 MiB
00:06:51.179      element at address: 0x200000400000 with size:    0.992004 MiB
00:06:51.179      element at address: 0x200019200000 with size:    0.959656 MiB
00:06:51.179      element at address: 0x200019d00040 with size:    0.936401 MiB
00:06:51.179      element at address: 0x200000200000 with size:    0.716980 MiB
00:06:51.179      element at address: 0x20001b400000 with size:    0.563416 MiB
00:06:51.179      element at address: 0x200000c00000 with size:    0.489197 MiB
00:06:51.179      element at address: 0x200019600000 with size:    0.487976 MiB
00:06:51.179      element at address: 0x200019e00000 with size:    0.485413 MiB
00:06:51.179      element at address: 0x200012c00000 with size:    0.433228 MiB
00:06:51.179      element at address: 0x200028800000 with size:    0.390442 MiB
00:06:51.179      element at address: 0x200000800000 with size:    0.350891 MiB
00:06:51.179    list of standard malloc elements. size: 199.287231 MiB
00:06:51.179      element at address: 0x20000a7fef80 with size:  132.000183 MiB
00:06:51.179      element at address: 0x2000065fef80 with size:   64.000183 MiB
00:06:51.179      element at address: 0x2000193fff80 with size:    1.000183 MiB
00:06:51.179      element at address: 0x2000197fff80 with size:    1.000183 MiB
00:06:51.179      element at address: 0x200019bfff80 with size:    1.000183 MiB
00:06:51.179      element at address: 0x2000003d9e80 with size:    0.140808 MiB
00:06:51.179      element at address: 0x200019deff40 with size:    0.062683 MiB
00:06:51.179      element at address: 0x2000003fdf40 with size:    0.007996 MiB
00:06:51.179      element at address: 0x20000a5ff040 with size:    0.000427 MiB
00:06:51.179      element at address: 0x200019defdc0 with size:    0.000366 MiB
00:06:51.179      element at address: 0x200012bff040 with size:    0.000305 MiB
00:06:51.179      element at address: 0x2000002d7b00 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000003d9d80 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fdf40 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fe040 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fe140 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fe240 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fe340 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fe440 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fe540 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fe640 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fe740 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fe840 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fe940 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fea40 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004feb40 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fec40 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fed40 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fee40 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004fef40 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ff040 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ff140 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ff240 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ff340 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ff440 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ff540 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ff640 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ff740 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ff840 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ff940 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ffbc0 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ffcc0 with size:    0.000244 MiB
00:06:51.179      element at address: 0x2000004ffdc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087e1c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087e2c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087e3c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087e4c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087e5c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087e6c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087e7c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087e8c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087e9c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087eac0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087ebc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087ecc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087edc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087eec0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087efc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087f0c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087f1c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087f2c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087f3c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000087f4c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x2000008ff800 with size:    0.000244 MiB
00:06:51.180      element at address: 0x2000008ffa80 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7d3c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7d4c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7d5c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7d6c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7d7c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7d8c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7d9c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7dac0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7dbc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7dcc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7ddc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7dec0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7dfc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7e0c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7e1c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7e2c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7e3c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7e4c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7e5c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7e6c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7e7c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7e8c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7e9c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7eac0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000c7ebc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000cfef00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200000cff000 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ff200 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ff300 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ff400 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ff500 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ff600 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ff700 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ff800 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ff900 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ffa00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ffb00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ffc00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ffd00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5ffe00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20000a5fff00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bff180 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bff280 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bff380 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bff480 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bff580 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bff680 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bff780 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bff880 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bff980 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bffa80 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bffb80 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bffc80 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012bfff00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6ee80 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6ef80 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6f080 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6f180 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6f280 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6f380 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6f480 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6f580 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6f680 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6f780 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012c6f880 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200012cefbc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x2000192fdd00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967cec0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967cfc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967d0c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967d1c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967d2c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967d3c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967d4c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967d5c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967d6c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967d7c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967d8c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001967d9c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x2000196fdd00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200019affc40 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200019defbc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200019defcc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200019ebc680 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4903c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4904c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4905c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4906c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4907c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4908c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4909c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b490ac0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b490bc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b490cc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b490dc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b490ec0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b490fc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4910c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4911c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4912c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4913c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4914c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4915c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4916c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4917c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4918c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4919c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b491ac0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b491bc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b491cc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b491dc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b491ec0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b491fc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4920c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4921c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4922c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4923c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4924c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4925c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4926c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4927c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4928c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4929c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b492ac0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b492bc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b492cc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b492dc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b492ec0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b492fc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4930c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4931c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4932c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4933c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4934c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4935c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4936c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4937c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4938c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4939c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b493ac0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b493bc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b493cc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b493dc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b493ec0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b493fc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4940c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4941c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4942c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4943c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4944c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4945c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4946c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4947c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4948c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4949c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b494ac0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b494bc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b494cc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b494dc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b494ec0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b494fc0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4950c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4951c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4952c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20001b4953c0 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200028863f40 with size:    0.000244 MiB
00:06:51.180      element at address: 0x200028864040 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20002886ad00 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20002886af80 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20002886b080 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20002886b180 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20002886b280 with size:    0.000244 MiB
00:06:51.180      element at address: 0x20002886b380 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886b480 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886b580 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886b680 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886b780 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886b880 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886b980 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886ba80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886bb80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886bc80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886bd80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886be80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886bf80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886c080 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886c180 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886c280 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886c380 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886c480 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886c580 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886c680 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886c780 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886c880 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886c980 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886ca80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886cb80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886cc80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886cd80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886ce80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886cf80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886d080 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886d180 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886d280 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886d380 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886d480 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886d580 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886d680 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886d780 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886d880 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886d980 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886da80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886db80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886dc80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886dd80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886de80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886df80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886e080 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886e180 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886e280 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886e380 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886e480 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886e580 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886e680 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886e780 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886e880 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886e980 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886ea80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886eb80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886ec80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886ed80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886ee80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886ef80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886f080 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886f180 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886f280 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886f380 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886f480 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886f580 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886f680 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886f780 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886f880 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886f980 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886fa80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886fb80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886fc80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886fd80 with size:    0.000244 MiB
00:06:51.181      element at address: 0x20002886fe80 with size:    0.000244 MiB
00:06:51.181    list of memzone associated elements. size: 607.930908 MiB
00:06:51.181      element at address: 0x20001b4954c0 with size:  211.416809 MiB
00:06:51.181        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:51.181      element at address: 0x20002886ff80 with size:  157.562622 MiB
00:06:51.181        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:51.181      element at address: 0x200012df1e40 with size:  100.055115 MiB
00:06:51.181        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_59337_0
00:06:51.181      element at address: 0x200000dff340 with size:   48.003113 MiB
00:06:51.181        associated memzone info: size:   48.002930 MiB name: MP_msgpool_59337_0
00:06:51.181      element at address: 0x200003ffdb40 with size:   36.008972 MiB
00:06:51.181        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_59337_0
00:06:51.181      element at address: 0x200019fbe900 with size:   20.255615 MiB
00:06:51.181        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:06:51.181      element at address: 0x2000327feb00 with size:   18.005127 MiB
00:06:51.181        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:51.181      element at address: 0x2000004ffec0 with size:    3.000305 MiB
00:06:51.181        associated memzone info: size:    3.000122 MiB name: MP_evtpool_59337_0
00:06:51.181      element at address: 0x2000009ffdc0 with size:    2.000549 MiB
00:06:51.181        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_59337
00:06:51.181      element at address: 0x2000002d7c00 with size:    1.008179 MiB
00:06:51.181        associated memzone info: size:    1.007996 MiB name: MP_evtpool_59337
00:06:51.181      element at address: 0x2000196fde00 with size:    1.008179 MiB
00:06:51.181        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:06:51.181      element at address: 0x200019ebc780 with size:    1.008179 MiB
00:06:51.181        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:51.181      element at address: 0x2000192fde00 with size:    1.008179 MiB
00:06:51.181        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:06:51.181      element at address: 0x200012cefcc0 with size:    1.008179 MiB
00:06:51.181        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:51.181      element at address: 0x200000cff100 with size:    1.000549 MiB
00:06:51.181        associated memzone info: size:    1.000366 MiB name: RG_ring_0_59337
00:06:51.181      element at address: 0x2000008ffb80 with size:    1.000549 MiB
00:06:51.181        associated memzone info: size:    1.000366 MiB name: RG_ring_1_59337
00:06:51.181      element at address: 0x200019affd40 with size:    1.000549 MiB
00:06:51.181        associated memzone info: size:    1.000366 MiB name: RG_ring_4_59337
00:06:51.181      element at address: 0x2000326fe8c0 with size:    1.000549 MiB
00:06:51.181        associated memzone info: size:    1.000366 MiB name: RG_ring_5_59337
00:06:51.181      element at address: 0x20000087f5c0 with size:    0.500549 MiB
00:06:51.181        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_59337
00:06:51.181      element at address: 0x200000c7ecc0 with size:    0.500549 MiB
00:06:51.181        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_59337
00:06:51.181      element at address: 0x20001967dac0 with size:    0.500549 MiB
00:06:51.181        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:06:51.181      element at address: 0x200012c6f980 with size:    0.500549 MiB
00:06:51.181        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:51.181      element at address: 0x200019e7c440 with size:    0.250549 MiB
00:06:51.181        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:51.181      element at address: 0x2000002b78c0 with size:    0.125549 MiB
00:06:51.181        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_59337
00:06:51.181      element at address: 0x20000085df80 with size:    0.125549 MiB
00:06:51.181        associated memzone info: size:    0.125366 MiB name: RG_ring_2_59337
00:06:51.181      element at address: 0x2000192f5ac0 with size:    0.031799 MiB
00:06:51.181        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:51.181      element at address: 0x200028864140 with size:    0.023804 MiB
00:06:51.181        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:06:51.181      element at address: 0x200000859d40 with size:    0.016174 MiB
00:06:51.181        associated memzone info: size:    0.015991 MiB name: RG_ring_3_59337
00:06:51.181      element at address: 0x20002886a2c0 with size:    0.002502 MiB
00:06:51.181        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:06:51.181      element at address: 0x2000004ffa40 with size:    0.000366 MiB
00:06:51.181        associated memzone info: size:    0.000183 MiB name: MP_msgpool_59337
00:06:51.181      element at address: 0x2000008ff900 with size:    0.000366 MiB
00:06:51.181        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_59337
00:06:51.181      element at address: 0x200012bffd80 with size:    0.000366 MiB
00:06:51.181        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_59337
00:06:51.181      element at address: 0x20002886ae00 with size:    0.000366 MiB
00:06:51.181        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
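The per-element map is verbose by design, but the naming makes it filterable: as the listing shows, DPDK backs each mempool with a memzone named MP_<pool> and a ring named RG_MP_<pool>, and this test suffixes its pools with the target's pid. The allocations belonging to pid 59337 therefore fall out of a simple grep (an illustrative filter, not part of the test):

    # List the mempool/ring memzones tied to spdk_tgt pid 59337.
    scripts/dpdk_mem_info.py -m 0 | grep -E 'name: (RG_)?MP_.*59337'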
00:06:51.181   14:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:51.181   14:17:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59337
00:06:51.181   14:17:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59337 ']'
00:06:51.181   14:17:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59337
00:06:51.181    14:17:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:06:51.181   14:17:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:51.181    14:17:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59337
00:06:51.181  killing process with pid 59337
00:06:51.181   14:17:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:51.181   14:17:30 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:51.181   14:17:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59337'
00:06:51.181   14:17:30 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59337
00:06:51.181   14:17:30 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59337
00:06:53.709  
00:06:53.709  real	0m3.652s
00:06:53.709  user	0m3.804s
00:06:53.709  sys	0m0.472s
00:06:53.709   14:17:32 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.709  ************************************
00:06:53.709  END TEST dpdk_mem_utility
00:06:53.709  ************************************
00:06:53.709   14:17:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:53.709   14:17:32  -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:53.709   14:17:32  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:53.709   14:17:32  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.709   14:17:32  -- common/autotest_common.sh@10 -- # set +x
00:06:53.709  ************************************
00:06:53.709  START TEST event
00:06:53.709  ************************************
00:06:53.709   14:17:32 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:53.709  * Looking for test storage...
00:06:53.709  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:53.709    14:17:32 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:53.709     14:17:32 event -- common/autotest_common.sh@1693 -- # lcov --version
00:06:53.709     14:17:32 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:53.709    14:17:32 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:53.709    14:17:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:53.709    14:17:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:53.709    14:17:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:53.709    14:17:32 event -- scripts/common.sh@336 -- # IFS=.-:
00:06:53.709    14:17:32 event -- scripts/common.sh@336 -- # read -ra ver1
00:06:53.709    14:17:32 event -- scripts/common.sh@337 -- # IFS=.-:
00:06:53.709    14:17:32 event -- scripts/common.sh@337 -- # read -ra ver2
00:06:53.709    14:17:32 event -- scripts/common.sh@338 -- # local 'op=<'
00:06:53.709    14:17:32 event -- scripts/common.sh@340 -- # ver1_l=2
00:06:53.709    14:17:32 event -- scripts/common.sh@341 -- # ver2_l=1
00:06:53.709    14:17:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:53.709    14:17:32 event -- scripts/common.sh@344 -- # case "$op" in
00:06:53.709    14:17:32 event -- scripts/common.sh@345 -- # : 1
00:06:53.709    14:17:32 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:53.709    14:17:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:53.709     14:17:32 event -- scripts/common.sh@365 -- # decimal 1
00:06:53.709     14:17:32 event -- scripts/common.sh@353 -- # local d=1
00:06:53.709     14:17:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:53.709     14:17:32 event -- scripts/common.sh@355 -- # echo 1
00:06:53.709    14:17:32 event -- scripts/common.sh@365 -- # ver1[v]=1
00:06:53.709     14:17:32 event -- scripts/common.sh@366 -- # decimal 2
00:06:53.709     14:17:32 event -- scripts/common.sh@353 -- # local d=2
00:06:53.709     14:17:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:53.709     14:17:32 event -- scripts/common.sh@355 -- # echo 2
00:06:53.709    14:17:32 event -- scripts/common.sh@366 -- # ver2[v]=2
00:06:53.709    14:17:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:53.710    14:17:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:53.710    14:17:32 event -- scripts/common.sh@368 -- # return 0
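
The lt 1.15 2 trace above is the generic version comparator in scripts/common.sh: both strings are split on '.', '-' and ':' via IFS, then compared field by field as integers until one side wins. A rough standalone equivalent (hypothetical name; missing fields treated as 0):

    version_lt() {
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a < b )) && return 0               # strictly less-than
            (( a > b )) && return 1
        done
        return 1                                  # equal is not less-than
    }

    version_lt 1.15 2 && echo 'lcov is older than 2.x'   # the branch taken above
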
00:06:53.710    14:17:32 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:53.710    14:17:32 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:53.710  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.710  		--rc genhtml_branch_coverage=1
00:06:53.710  		--rc genhtml_function_coverage=1
00:06:53.710  		--rc genhtml_legend=1
00:06:53.710  		--rc geninfo_all_blocks=1
00:06:53.710  		--rc geninfo_unexecuted_blocks=1
00:06:53.710  		
00:06:53.710  		'
00:06:53.710    14:17:32 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:53.710  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.710  		--rc genhtml_branch_coverage=1
00:06:53.710  		--rc genhtml_function_coverage=1
00:06:53.710  		--rc genhtml_legend=1
00:06:53.710  		--rc geninfo_all_blocks=1
00:06:53.710  		--rc geninfo_unexecuted_blocks=1
00:06:53.710  		
00:06:53.710  		'
00:06:53.710    14:17:32 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:53.710  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.710  		--rc genhtml_branch_coverage=1
00:06:53.710  		--rc genhtml_function_coverage=1
00:06:53.710  		--rc genhtml_legend=1
00:06:53.710  		--rc geninfo_all_blocks=1
00:06:53.710  		--rc geninfo_unexecuted_blocks=1
00:06:53.710  		
00:06:53.710  		'
00:06:53.710    14:17:32 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:53.710  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.710  		--rc genhtml_branch_coverage=1
00:06:53.710  		--rc genhtml_function_coverage=1
00:06:53.710  		--rc genhtml_legend=1
00:06:53.710  		--rc geninfo_all_blocks=1
00:06:53.710  		--rc geninfo_unexecuted_blocks=1
00:06:53.710  		
00:06:53.710  		'
00:06:53.710   14:17:32 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:53.710    14:17:32 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:53.710   14:17:32 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:53.710   14:17:32 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:06:53.710   14:17:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.710   14:17:32 event -- common/autotest_common.sh@10 -- # set +x
00:06:53.710  ************************************
00:06:53.710  START TEST event_perf
00:06:53.710  ************************************
00:06:53.710   14:17:32 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:53.710  Running I/O for 1 seconds...[2024-11-20 14:17:32.481723] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:53.710  [2024-11-20 14:17:32.481874] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59446 ]
00:06:53.710  [2024-11-20 14:17:32.669718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:53.977  [2024-11-20 14:17:32.804841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:53.977  [2024-11-20 14:17:32.804975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:53.977  Running I/O for 1 seconds...[2024-11-20 14:17:32.805078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:53.977  [2024-11-20 14:17:32.805296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.354  
00:06:55.354  lcore  0:   193934
00:06:55.354  lcore  1:   193931
00:06:55.354  lcore  2:   193932
00:06:55.354  lcore  3:   193933
00:06:55.354  done.
00:06:55.354  
00:06:55.354  real	0m1.605s
00:06:55.354  user	0m4.369s
00:06:55.354  sys	0m0.109s
00:06:55.354   14:17:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:55.354   14:17:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:55.354  ************************************
00:06:55.354  END TEST event_perf
00:06:55.354  ************************************
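
Each lcore line above is a raw event counter over the -t 1 second run on mask 0xF, so aggregate throughput is their sum over the duration: roughly 4 x 194k, about 776k events/s here. Pulling that out of a saved copy of this output (illustrative awk; event_perf.log is a placeholder name):

    awk '/lcore +[0-9]+:/ { total += $NF }
         END { printf "%.0f events/s\n", total / 1 }' event_perf.log   # 1 = the -t value
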
00:06:55.354   14:17:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:55.354   14:17:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:55.354   14:17:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:55.354   14:17:34 event -- common/autotest_common.sh@10 -- # set +x
00:06:55.354  ************************************
00:06:55.354  START TEST event_reactor
00:06:55.354  ************************************
00:06:55.355   14:17:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:55.355  [2024-11-20 14:17:34.131512] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:55.355  [2024-11-20 14:17:34.131665] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59486 ]
00:06:55.355  [2024-11-20 14:17:34.309844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.614  [2024-11-20 14:17:34.439663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.989  test_start
00:06:56.989  oneshot
00:06:56.989  tick 100
00:06:56.989  tick 100
00:06:56.989  tick 250
00:06:56.989  tick 100
00:06:56.989  tick 100
00:06:56.989  tick 250
00:06:56.989  tick 500
00:06:56.989  tick 100
00:06:56.989  tick 100
00:06:56.989  tick 100
00:06:56.989  tick 250
00:06:56.989  tick 100
00:06:56.989  tick 100
00:06:56.989  test_end
00:06:56.989  
00:06:56.989  real	0m1.571s
00:06:56.989  user	0m1.383s
00:06:56.989  sys	0m0.077s
00:06:56.989   14:17:35 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:56.989  ************************************
00:06:56.989  END TEST event_reactor
00:06:56.989   14:17:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:56.989  ************************************
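
The oneshot/tick dump above comes from timer pollers: one single-shot poller plus repeating ones, each line recording which poller fired. The counts are consistent with relative periods of 100:250:500 (likely milliseconds over the roughly 1 s window: nine firings at 100, three at 250, one at 500 once startup is excluded). A quick tally over a saved copy of the output (placeholder file name):

    awk '/tick [0-9]+$/ { fired[$NF]++ }
         END { for (p in fired) print "period " p ": " fired[p] " firing(s)" }' reactor.log
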
00:06:56.989   14:17:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:56.989   14:17:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:06:56.989   14:17:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:56.990   14:17:35 event -- common/autotest_common.sh@10 -- # set +x
00:06:56.990  ************************************
00:06:56.990  START TEST event_reactor_perf
00:06:56.990  ************************************
00:06:56.990   14:17:35 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:56.990  [2024-11-20 14:17:35.749312] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:56.990  [2024-11-20 14:17:35.749648] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59522 ]
00:06:56.990  [2024-11-20 14:17:35.925082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:57.249  [2024-11-20 14:17:36.029957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:58.627  test_start
00:06:58.627  test_end
00:06:58.627  Performance:   267662 events per second
00:06:58.627  ************************************
00:06:58.627  END TEST event_reactor_perf
00:06:58.627  ************************************
00:06:58.627  
00:06:58.627  real	0m1.552s
00:06:58.627  user	0m1.361s
00:06:58.627  sys	0m0.080s
00:06:58.627   14:17:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.627   14:17:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
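
reactor_perf's single figure (267662 events per second here) comes from one fixed-duration run, so sweeping -t is a cheap stability check: longer runs should converge on a similar rate if the number is not a startup artifact. A sketch using the same in-tree binary:

    for t in 1 3 10; do
        /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t "$t" |
            grep 'events per second'
    done
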
00:06:58.627    14:17:37 event -- event/event.sh@49 -- # uname -s
00:06:58.627   14:17:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:58.627   14:17:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:58.627   14:17:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:58.627   14:17:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.627   14:17:37 event -- common/autotest_common.sh@10 -- # set +x
00:06:58.627  ************************************
00:06:58.627  START TEST event_scheduler
00:06:58.627  ************************************
00:06:58.627   14:17:37 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:58.627  * Looking for test storage...
00:06:58.627  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:06:58.627    14:17:37 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:58.627     14:17:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:58.627     14:17:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:06:58.627    14:17:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:58.627     14:17:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:06:58.627     14:17:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:06:58.627     14:17:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:58.627     14:17:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:06:58.627     14:17:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:06:58.627     14:17:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:06:58.627     14:17:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:58.627     14:17:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:58.627    14:17:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:06:58.627    14:17:37 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:58.627    14:17:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:58.627  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.627  		--rc genhtml_branch_coverage=1
00:06:58.627  		--rc genhtml_function_coverage=1
00:06:58.627  		--rc genhtml_legend=1
00:06:58.627  		--rc geninfo_all_blocks=1
00:06:58.627  		--rc geninfo_unexecuted_blocks=1
00:06:58.628  		
00:06:58.628  		'
00:06:58.628    14:17:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:58.628  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.628  		--rc genhtml_branch_coverage=1
00:06:58.628  		--rc genhtml_function_coverage=1
00:06:58.628  		--rc genhtml_legend=1
00:06:58.628  		--rc geninfo_all_blocks=1
00:06:58.628  		--rc geninfo_unexecuted_blocks=1
00:06:58.628  		
00:06:58.628  		'
00:06:58.628    14:17:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:58.628  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.628  		--rc genhtml_branch_coverage=1
00:06:58.628  		--rc genhtml_function_coverage=1
00:06:58.628  		--rc genhtml_legend=1
00:06:58.628  		--rc geninfo_all_blocks=1
00:06:58.628  		--rc geninfo_unexecuted_blocks=1
00:06:58.628  		
00:06:58.628  		'
00:06:58.628    14:17:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:58.628  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:58.628  		--rc genhtml_branch_coverage=1
00:06:58.628  		--rc genhtml_function_coverage=1
00:06:58.628  		--rc genhtml_legend=1
00:06:58.628  		--rc geninfo_all_blocks=1
00:06:58.628  		--rc geninfo_unexecuted_blocks=1
00:06:58.628  		
00:06:58.628  		'
00:06:58.628   14:17:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:58.628   14:17:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59598
00:06:58.628   14:17:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:58.628   14:17:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59598
00:06:58.628   14:17:37 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59598 ']'
00:06:58.628   14:17:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:58.628   14:17:37 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.628   14:17:37 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:58.628   14:17:37 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:58.627  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:58.628   14:17:37 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:58.628   14:17:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:58.628  [2024-11-20 14:17:37.600822] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:06:58.628  [2024-11-20 14:17:37.601213] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59598 ]
00:06:58.887  [2024-11-20 14:17:37.791890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:59.145  [2024-11-20 14:17:37.947960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.145  [2024-11-20 14:17:37.948074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:59.145  [2024-11-20 14:17:37.948131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:59.145  [2024-11-20 14:17:37.948129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:59.725   14:17:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:59.725   14:17:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:06:59.725   14:17:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:59.725   14:17:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.725   14:17:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:59.725  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:59.725  POWER: Cannot set governor of lcore 0 to userspace
00:06:59.725  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:59.725  POWER: Cannot set governor of lcore 0 to performance
00:06:59.725  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:59.725  POWER: Cannot set governor of lcore 0 to userspace
00:06:59.725  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:59.725  POWER: Cannot set governor of lcore 0 to userspace
00:06:59.725  GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:06:59.725  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:59.725  POWER: Unable to set Power Management Environment for lcore 0
00:06:59.725  [2024-11-20 14:17:38.651417] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:06:59.725  [2024-11-20 14:17:38.651555] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:06:59.725  [2024-11-20 14:17:38.651688] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:06:59.725  [2024-11-20 14:17:38.651771] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:59.725  [2024-11-20 14:17:38.651892] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:59.725  [2024-11-20 14:17:38.651970] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:59.725   14:17:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:59.725   14:17:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:59.725   14:17:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:59.725   14:17:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:00.008  [2024-11-20 14:17:38.950329] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:07:00.008   14:17:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.008   14:17:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:00.008   14:17:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:00.008   14:17:38 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.008   14:17:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
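
The setup above is the standard --wait-for-rpc handshake: the scheduler app boots with framework initialization paused, waitforlisten polls until /var/tmp/spdk.sock answers, the dynamic scheduler is selected over RPC, and framework_start_init then lets initialization proceed. (The POWER/cpufreq and guest-channel errors only mean this VM exposes no frequency-scaling interface, so the dpdk governor is skipped; the dynamic scheduler still comes up with its load/core/busy limits.) Reduced to the bare RPC calls, with the paths used in this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    "$RPC" -s "$SOCK" framework_set_scheduler dynamic   # must precede init
    "$RPC" -s "$SOCK" framework_start_init              # unpause subsystem init
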
00:07:00.008  ************************************
00:07:00.008  START TEST scheduler_create_thread
00:07:00.008  ************************************
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.008  2
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.008  3
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.008   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.267  4
00:07:00.267   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.267   14:17:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:07:00.267   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.267   14:17:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.267  5
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.267  6
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.267  7
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.267  8
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.267  9
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.267  10
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.267    14:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:00.267    14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.267    14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.267    14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:00.267   14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.267    14:17:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:00.267    14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.267    14:17:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:01.644    14:17:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.644   14:17:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:01.644   14:17:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:01.644   14:17:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.644   14:17:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:03.020  ************************************
00:07:03.020  END TEST scheduler_create_thread
00:07:03.020  ************************************
00:07:03.020   14:17:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:03.020  
00:07:03.020  real	0m2.620s
00:07:03.020  user	0m0.019s
00:07:03.020  sys	0m0.004s
00:07:03.020   14:17:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:03.020   14:17:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
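
Everything scheduler_create_thread does goes through rpc.py with the test's scheduler_plugin loaded: four fully active threads pinned one per core (masks 0x1 through 0x8, -a 100), four idle pinned ones (-a 0), an unpinned thread at 30% activity, one whose activity is raised live to 50, and one created only to be deleted. Condensed to the calls the trace shows (creation order simplified; assumes the plugin is importable, as the harness arranges):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin'
    for mask in 0x1 0x2 0x4 0x8; do
        $RPC scheduler_thread_create -n active_pinned -m "$mask" -a 100
        $RPC scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
    done
    $RPC scheduler_thread_create -n one_third_active -a 30      # unpinned
    id=$($RPC scheduler_thread_create -n half_active -a 0)
    $RPC scheduler_thread_set_active "$id" 50                   # retune while running
    id=$($RPC scheduler_thread_create -n deleted -a 100)
    $RPC scheduler_thread_delete "$id"
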
00:07:03.020   14:17:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:03.020   14:17:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59598
00:07:03.020   14:17:41 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59598 ']'
00:07:03.020   14:17:41 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59598
00:07:03.020    14:17:41 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:07:03.020   14:17:41 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:03.020    14:17:41 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59598
00:07:03.020   14:17:41 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:07:03.020   14:17:41 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:07:03.020   14:17:41 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59598'
00:07:03.020  killing process with pid 59598
00:07:03.020   14:17:41 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59598
00:07:03.020   14:17:41 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59598
00:07:03.278  [2024-11-20 14:17:42.060776] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:07:04.214  
00:07:04.214  real	0m5.780s
00:07:04.214  user	0m10.377s
00:07:04.214  sys	0m0.462s
00:07:04.214  ************************************
00:07:04.214  END TEST event_scheduler
00:07:04.214  ************************************
00:07:04.214   14:17:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:04.214   14:17:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:04.214   14:17:43 event -- event/event.sh@51 -- # modprobe -n nbd
00:07:04.214   14:17:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:07:04.214   14:17:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:04.214   14:17:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:04.214   14:17:43 event -- common/autotest_common.sh@10 -- # set +x
00:07:04.214  ************************************
00:07:04.214  START TEST app_repeat
00:07:04.214  ************************************
00:07:04.214   14:17:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59712
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59712'
00:07:04.214  Process app_repeat pid: 59712
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:07:04.214  spdk_app_start Round 0
00:07:04.214   14:17:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59712 /var/tmp/spdk-nbd.sock
00:07:04.214   14:17:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59712 ']'
00:07:04.214   14:17:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:04.214   14:17:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:04.214   14:17:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:04.214  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:04.214   14:17:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:04.214   14:17:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:04.472  [2024-11-20 14:17:43.201958] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:04.472  [2024-11-20 14:17:43.202264] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59712 ]
00:07:04.472  [2024-11-20 14:17:43.374345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:04.730  [2024-11-20 14:17:43.485389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:04.730  [2024-11-20 14:17:43.485392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.664   14:17:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.664   14:17:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:05.664   14:17:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:05.664  Malloc0
00:07:05.922   14:17:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:06.189  Malloc1
00:07:06.189   14:17:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:06.189   14:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:06.190   14:17:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:06.477  /dev/nbd0
00:07:06.477    14:17:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:06.477   14:17:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:06.477  1+0 records in
00:07:06.477  1+0 records out
00:07:06.477  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342791 s, 11.9 MB/s
00:07:06.477    14:17:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:06.477   14:17:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:06.477   14:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:06.477   14:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:06.477   14:17:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:07.044  /dev/nbd1
00:07:07.044    14:17:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:07.044   14:17:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:07.044  1+0 records in
00:07:07.044  1+0 records out
00:07:07.044  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00032242 s, 12.7 MB/s
00:07:07.044    14:17:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:07.044   14:17:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:07.044   14:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:07.044   14:17:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
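
waitfornbd, traced for both devices above, is how the harness proves an exported device is real before using it: poll /proc/partitions until the nbd name appears, then do one O_DIRECT read through the block device and check that data actually came back. Simplified shape (the real helper retries each step up to 20 times):

    waitfornbd() {
        local nbd=$1 i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd" /proc/partitions && break   # kernel registered it
            sleep 0.1
        done
        # one direct-I/O block read: fails loudly if the backing bdev is broken
        dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]                                  # must have read real bytes
    }
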
00:07:07.044    14:17:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:07.044    14:17:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:07.044     14:17:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:07.303    14:17:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:07.303    {
00:07:07.303      "nbd_device": "/dev/nbd0",
00:07:07.303      "bdev_name": "Malloc0"
00:07:07.303    },
00:07:07.303    {
00:07:07.303      "nbd_device": "/dev/nbd1",
00:07:07.303      "bdev_name": "Malloc1"
00:07:07.303    }
00:07:07.303  ]'
00:07:07.303     14:17:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:07.303    {
00:07:07.303      "nbd_device": "/dev/nbd0",
00:07:07.303      "bdev_name": "Malloc0"
00:07:07.303    },
00:07:07.303    {
00:07:07.303      "nbd_device": "/dev/nbd1",
00:07:07.303      "bdev_name": "Malloc1"
00:07:07.303    }
00:07:07.303  ]'
00:07:07.303     14:17:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:07.303    14:17:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:07.303  /dev/nbd1'
00:07:07.303     14:17:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:07.303  /dev/nbd1'
00:07:07.303     14:17:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:07.303    14:17:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:07.303    14:17:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
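
The count assertion above parses the nbd_get_disks JSON with jq and requires exactly as many /dev/nbd entries back as devices were started. As a free-standing pipeline against the socket used in this run:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$RPC" -s /var/tmp/spdk-nbd.sock nbd_get_disks |
            jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -eq 2 ] || { echo "expected 2 nbd devices, got $count" >&2; exit 1; }
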
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:07.303  256+0 records in
00:07:07.303  256+0 records out
00:07:07.303  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00819066 s, 128 MB/s
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:07.303  256+0 records in
00:07:07.303  256+0 records out
00:07:07.303  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295968 s, 35.4 MB/s
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:07.303   14:17:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:07.561  256+0 records in
00:07:07.561  256+0 records out
00:07:07.562  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0355926 s, 29.5 MB/s
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
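
The write/verify pass works from one shared 1 MiB random file: write it through every nbd device with O_DIRECT, then cmp each device's contents back against the file, so both Malloc bdevs must round-trip identical bytes. Stripped to its essentials (mktemp stands in for the harness's nbdrandtest path):

    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256      # 1 MiB of reference data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp" "$nbd"                      # byte-for-byte readback
    done
    rm "$tmp"
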
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:07.562   14:17:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:07.820    14:17:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:07.820   14:17:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:07.820   14:17:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:07.820   14:17:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:07.820   14:17:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:07.820   14:17:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:07.820   14:17:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:07.820   14:17:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:07.820   14:17:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:07.820   14:17:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:08.078    14:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:08.078   14:17:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:08.078   14:17:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:08.078   14:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:08.078   14:17:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:08.078   14:17:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:08.078   14:17:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:08.078   14:17:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:08.078    14:17:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:08.078    14:17:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:08.078     14:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:08.651    14:17:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:08.651     14:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:08.651     14:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:08.651    14:17:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:08.651     14:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:08.651     14:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:08.651     14:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:08.651    14:17:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:08.651    14:17:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:08.651   14:17:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:08.651   14:17:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:08.651   14:17:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:08.651   14:17:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:09.218   14:17:47 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:10.153  [2024-11-20 14:17:49.003503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:10.153  [2024-11-20 14:17:49.104262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:10.153  [2024-11-20 14:17:49.104275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.412  [2024-11-20 14:17:49.272550] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:10.412  [2024-11-20 14:17:49.272675] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:12.313  spdk_app_start Round 1
00:07:12.313   14:17:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:12.313   14:17:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:07:12.313   14:17:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59712 /var/tmp/spdk-nbd.sock
00:07:12.313   14:17:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59712 ']'
00:07:12.313   14:17:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:12.313   14:17:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:12.313   14:17:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:12.313  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:12.313   14:17:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:12.313   14:17:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:12.313   14:17:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:12.313   14:17:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:12.313   14:17:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:12.880  Malloc0
00:07:12.880   14:17:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:13.138  Malloc1
00:07:13.138   14:17:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:13.138   14:17:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:13.397  /dev/nbd0
00:07:13.397    14:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:13.397   14:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:13.397  1+0 records in
00:07:13.397  1+0 records out
00:07:13.397  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284776 s, 14.4 MB/s
00:07:13.397    14:17:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:13.397   14:17:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:13.397   14:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:13.397   14:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:13.397   14:17:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:13.656  /dev/nbd1
00:07:13.915    14:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:13.915   14:17:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:13.915  1+0 records in
00:07:13.915  1+0 records out
00:07:13.915  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003959 s, 10.3 MB/s
00:07:13.915    14:17:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:13.915   14:17:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:13.915   14:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:13.915   14:17:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
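
[annotation] The trace above shows the waitfornbd pattern used after each nbd_start_disk RPC: poll /proc/partitions until the device name appears, then read one 4 KiB block with O_DIRECT and check the result is non-empty. A minimal bash sketch of that pattern (grep/dd/stat steps and the 20-try cap are from the trace; the waitfornbd_sketch name and /tmp/nbdtest path are illustrative, the real script uses a file under the repo's test/event directory):

    waitfornbd_sketch() {
        local nbd_name=$1 i
        # wait for the kernel to expose the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # prove the device is readable: one 4 KiB O_DIRECT read
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # zero bytes copied would mean the device is not ready
    }
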
00:07:13.915    14:17:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:13.915    14:17:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:13.915     14:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:14.174    14:17:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:14.174    {
00:07:14.174      "nbd_device": "/dev/nbd0",
00:07:14.174      "bdev_name": "Malloc0"
00:07:14.174    },
00:07:14.174    {
00:07:14.174      "nbd_device": "/dev/nbd1",
00:07:14.174      "bdev_name": "Malloc1"
00:07:14.174    }
00:07:14.174  ]'
00:07:14.174     14:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:14.174     14:17:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:14.174    {
00:07:14.174      "nbd_device": "/dev/nbd0",
00:07:14.174      "bdev_name": "Malloc0"
00:07:14.174    },
00:07:14.174    {
00:07:14.174      "nbd_device": "/dev/nbd1",
00:07:14.174      "bdev_name": "Malloc1"
00:07:14.174    }
00:07:14.174  ]'
00:07:14.174    14:17:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:14.174  /dev/nbd1'
00:07:14.174     14:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:14.174  /dev/nbd1'
00:07:14.174     14:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:14.174    14:17:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:14.174    14:17:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
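
[annotation] nbd_get_count, traced above, derives the device count by feeding the nbd_get_disks JSON through jq and counting /dev/nbd lines. A condensed sketch of that pipeline (rpc.py path, socket, jq filter, and the grep -c trick are from the trace; note the `|| true`, which mirrors the `true` in the trace where grep -c finds zero matches and exits non-zero):

    # Count active NBD devices reported by the target over the RPC socket.
    disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -ne 2 ] && echo "unexpected NBD device count: $count" >&2
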
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:14.174  256+0 records in
00:07:14.174  256+0 records out
00:07:14.174  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0061743 s, 170 MB/s
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:14.174  256+0 records in
00:07:14.174  256+0 records out
00:07:14.174  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248869 s, 42.1 MB/s
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:14.174  256+0 records in
00:07:14.174  256+0 records out
00:07:14.174  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315835 s, 33.2 MB/s
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
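
[annotation] The write/verify round-trip traced above: fill a temp file with 1 MiB of urandom, dd it onto each NBD device with oflag=direct, then cmp each device back against the file. A sketch of the same sequence (block size, count, and cmp flags are from the trace; the /tmp/nbdrandtest path is illustrative):

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB random pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"   # verify phase: fails on the first differing byte
    done
    rm "$tmp"
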
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:14.174   14:17:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:14.740    14:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:14.740   14:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:14.740   14:17:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:14.740   14:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:14.740   14:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:14.740   14:17:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:14.740   14:17:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:14.740   14:17:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:14.740   14:17:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:14.740   14:17:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:14.998    14:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:14.998   14:17:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:14.998   14:17:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:14.998   14:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:14.998   14:17:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:14.998   14:17:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:14.998   14:17:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:14.998   14:17:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:14.998    14:17:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:14.999    14:17:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:14.999     14:17:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:15.256    14:17:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:15.256     14:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:15.256     14:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:15.256    14:17:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:15.256     14:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:15.256     14:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:15.256     14:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:15.256    14:17:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:15.256    14:17:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:15.256   14:17:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:15.256   14:17:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:15.256   14:17:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
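
[annotation] Teardown, traced above, mirrors startup: nbd_stop_disk per device, then waitfornbd_exit polls until the name leaves /proc/partitions, and finally nbd_get_count must come back 0 (the empty '[]' from nbd_get_disks). A sketch of the stop-and-wait half (commands and retry cap from the trace):

    for nbd in /dev/nbd0 /dev/nbd1; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"
        name=$(basename "$nbd")
        # wait for the device to disappear from /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done
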
00:07:15.256   14:17:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:15.821   14:17:54 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:16.753  [2024-11-20 14:17:55.678307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:17.028  [2024-11-20 14:17:55.801584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.028  [2024-11-20 14:17:55.801718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:17.287  [2024-11-20 14:17:56.006717] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:17.287  [2024-11-20 14:17:56.006856] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:19.187   14:17:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:19.187  spdk_app_start Round 2
00:07:19.187  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:19.187   14:17:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:07:19.187   14:17:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59712 /var/tmp/spdk-nbd.sock
00:07:19.187   14:17:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59712 ']'
00:07:19.187   14:17:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:19.187   14:17:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:19.187   14:17:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:19.187   14:17:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:19.187   14:17:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:19.187   14:17:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:19.187   14:17:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:19.187   14:17:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:19.446  Malloc0
00:07:19.446   14:17:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:19.704  Malloc1
00:07:19.963   14:17:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:19.963   14:17:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:07:20.221  /dev/nbd0
00:07:20.221    14:17:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:20.221   14:17:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:20.221  1+0 records in
00:07:20.221  1+0 records out
00:07:20.221  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312265 s, 13.1 MB/s
00:07:20.221    14:17:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:20.221   14:17:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:20.221   14:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:20.221   14:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:20.221   14:17:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:07:20.479  /dev/nbd1
00:07:20.479    14:17:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:20.479   14:17:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:07:20.479  1+0 records in
00:07:20.479  1+0 records out
00:07:20.479  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396391 s, 10.3 MB/s
00:07:20.479    14:17:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:20.479   14:17:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:07:20.479   14:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:20.479   14:17:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:07:20.479    14:17:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:20.479    14:17:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:20.479     14:17:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:20.737    14:17:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:20.737    {
00:07:20.737      "nbd_device": "/dev/nbd0",
00:07:20.737      "bdev_name": "Malloc0"
00:07:20.737    },
00:07:20.737    {
00:07:20.737      "nbd_device": "/dev/nbd1",
00:07:20.737      "bdev_name": "Malloc1"
00:07:20.737    }
00:07:20.737  ]'
00:07:20.737     14:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:20.737    {
00:07:20.737      "nbd_device": "/dev/nbd0",
00:07:20.737      "bdev_name": "Malloc0"
00:07:20.737    },
00:07:20.737    {
00:07:20.737      "nbd_device": "/dev/nbd1",
00:07:20.737      "bdev_name": "Malloc1"
00:07:20.737    }
00:07:20.737  ]'
00:07:20.737     14:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:20.737    14:17:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:20.737  /dev/nbd1'
00:07:20.737     14:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:20.737     14:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:20.737  /dev/nbd1'
00:07:20.737    14:17:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:07:20.737    14:17:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:07:20.737  256+0 records in
00:07:20.737  256+0 records out
00:07:20.737  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00790743 s, 133 MB/s
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:20.737   14:17:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:07:20.995  256+0 records in
00:07:20.995  256+0 records out
00:07:20.995  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305669 s, 34.3 MB/s
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:07:20.995  256+0 records in
00:07:20.995  256+0 records out
00:07:20.995  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305028 s, 34.4 MB/s
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:20.995   14:17:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:21.252    14:18:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:21.252   14:18:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:21.252   14:18:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:21.252   14:18:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:21.252   14:18:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:21.252   14:18:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:21.252   14:18:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:21.252   14:18:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:21.252   14:18:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:21.252   14:18:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:21.511    14:18:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:21.511   14:18:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:21.511   14:18:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:21.511   14:18:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:21.511   14:18:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:21.511   14:18:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:21.511   14:18:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:07:21.511   14:18:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:07:21.511    14:18:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:21.511    14:18:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:21.511     14:18:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:22.079    14:18:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:22.079     14:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:22.079     14:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:22.079    14:18:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:22.079     14:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:22.079     14:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:22.079     14:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:07:22.079    14:18:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:07:22.079    14:18:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:22.079   14:18:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:07:22.079   14:18:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:07:22.079   14:18:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:07:22.079   14:18:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:07:22.645   14:18:01 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:07:23.580  [2024-11-20 14:18:02.325056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:23.580  [2024-11-20 14:18:02.424913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:23.580  [2024-11-20 14:18:02.424925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.839  [2024-11-20 14:18:02.590972] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:07:23.839  [2024-11-20 14:18:02.591080] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:07:25.742   14:18:04 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59712 /var/tmp/spdk-nbd.sock
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59712 ']'
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:25.742  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:07:25.742   14:18:04 event.app_repeat -- event/event.sh@39 -- # killprocess 59712
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59712 ']'
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59712
00:07:25.742    14:18:04 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:25.742    14:18:04 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59712
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:25.742  killing process with pid 59712
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59712'
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59712
00:07:25.742   14:18:04 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59712
00:07:26.679  spdk_app_start is called in Round 0.
00:07:26.679  Shutdown signal received, stop current app iteration
00:07:26.679  Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 reinitialization...
00:07:26.679  spdk_app_start is called in Round 1.
00:07:26.679  Shutdown signal received, stop current app iteration
00:07:26.679  Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 reinitialization...
00:07:26.679  spdk_app_start is called in Round 2.
00:07:26.679  Shutdown signal received, stop current app iteration
00:07:26.679  Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 reinitialization...
00:07:26.679  spdk_app_start is called in Round 3.
00:07:26.679  Shutdown signal received, stop current app iteration
00:07:26.679   14:18:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:07:26.679   14:18:05 event.app_repeat -- event/event.sh@42 -- # return 0
00:07:26.679  
00:07:26.679  real	0m22.447s
00:07:26.679  user	0m50.634s
00:07:26.679  sys	0m2.953s
00:07:26.679   14:18:05 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.679   14:18:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:26.679  ************************************
00:07:26.679  END TEST app_repeat
00:07:26.679  ************************************
00:07:26.679   14:18:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:07:26.679   14:18:05 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:26.679   14:18:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.679   14:18:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.679   14:18:05 event -- common/autotest_common.sh@10 -- # set +x
00:07:26.679  ************************************
00:07:26.679  START TEST cpu_locks
00:07:26.679  ************************************
00:07:26.679   14:18:05 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:07:26.938  * Looking for test storage...
00:07:26.938  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:26.938    14:18:05 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:07:26.938     14:18:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:07:26.938     14:18:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:07:26.938    14:18:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:26.938     14:18:05 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:07:26.938     14:18:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:07:26.938     14:18:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:26.938     14:18:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:07:26.938     14:18:05 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:07:26.938     14:18:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:07:26.938     14:18:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:26.938     14:18:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:26.938    14:18:05 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:07:26.938    14:18:05 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:26.938    14:18:05 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:07:26.938  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.938  		--rc genhtml_branch_coverage=1
00:07:26.938  		--rc genhtml_function_coverage=1
00:07:26.938  		--rc genhtml_legend=1
00:07:26.938  		--rc geninfo_all_blocks=1
00:07:26.938  		--rc geninfo_unexecuted_blocks=1
00:07:26.938  		
00:07:26.938  		'
00:07:26.938    14:18:05 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:07:26.938  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.938  		--rc genhtml_branch_coverage=1
00:07:26.938  		--rc genhtml_function_coverage=1
00:07:26.938  		--rc genhtml_legend=1
00:07:26.938  		--rc geninfo_all_blocks=1
00:07:26.938  		--rc geninfo_unexecuted_blocks=1
00:07:26.938  		
00:07:26.938  		'
00:07:26.938    14:18:05 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:07:26.938  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.938  		--rc genhtml_branch_coverage=1
00:07:26.938  		--rc genhtml_function_coverage=1
00:07:26.938  		--rc genhtml_legend=1
00:07:26.938  		--rc geninfo_all_blocks=1
00:07:26.938  		--rc geninfo_unexecuted_blocks=1
00:07:26.938  		
00:07:26.938  		'
00:07:26.938    14:18:05 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:07:26.938  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:26.938  		--rc genhtml_branch_coverage=1
00:07:26.938  		--rc genhtml_function_coverage=1
00:07:26.938  		--rc genhtml_legend=1
00:07:26.938  		--rc geninfo_all_blocks=1
00:07:26.938  		--rc geninfo_unexecuted_blocks=1
00:07:26.938  		
00:07:26.938  		'
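
[annotation] The cmp_versions trace above ("lt 1.15 2", deciding which lcov flags to use) splits both version strings on '.', '-' and ':' and compares the fields numerically, left to right. A condensed sketch of that comparison, assuming purely numeric fields (the real script additionally validates each field with a ^[0-9]+$ regex; lt_sketch is an illustrative name):

    lt_sketch() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt_sketch 1.15 2 && echo "lcov predates 2.x: enable branch/function coverage opts"
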
00:07:26.938   14:18:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:07:26.938   14:18:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:07:26.938   14:18:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:07:26.938   14:18:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:07:26.938   14:18:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.938   14:18:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.938   14:18:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:26.938  ************************************
00:07:26.938  START TEST default_locks
00:07:26.938  ************************************
00:07:26.938   14:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:07:26.938   14:18:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:26.938   14:18:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60194
00:07:26.938   14:18:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60194
00:07:26.938   14:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60194 ']'
00:07:26.938   14:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:26.938   14:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:26.938  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:26.938   14:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:26.938   14:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:26.938   14:18:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:26.938  [2024-11-20 14:18:05.916073] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:26.938  [2024-11-20 14:18:05.916221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60194 ]
00:07:27.197  [2024-11-20 14:18:06.088386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:27.455  [2024-11-20 14:18:06.190664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.390   14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:28.390   14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:07:28.390   14:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60194
00:07:28.390   14:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:28.390   14:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60194
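
[annotation] locks_exist, traced above, asserts that the spdk_tgt started with -m 0x1 actually holds its CPU-core lock file: lslocks on the pid must show an spdk_cpu_lock entry. A one-liner sketch (pid and lock-file name from the trace):

    # Assert the target process holds a CPU core lock.
    lslocks -p 60194 | grep -q spdk_cpu_lock && echo "core lock held by pid 60194"
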
00:07:28.648   14:18:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60194
00:07:28.648   14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60194 ']'
00:07:28.648   14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60194
00:07:28.648    14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:07:28.648   14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:28.648    14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60194
00:07:28.648   14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:28.648   14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:28.648  killing process with pid 60194
00:07:28.648   14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60194'
00:07:28.648   14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60194
00:07:28.648   14:18:07 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60194
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60194
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60194
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:31.177    14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60194
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60194 ']'
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:31.177  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:31.177  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60194) - No such process
00:07:31.177  ERROR: process (pid: 60194) is no longer running
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
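
[annotation] The NOT/valid_exec_arg trace above checks the negative path: after the target is killed, waitforlisten against the dead pid must fail, and the wrapper folds its exit status into `(( !es == 0 ))`. A sketch of that inversion pattern (NOT_sketch is an illustrative name for the wrapper the trace exercises):

    NOT_sketch() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # succeed only if the wrapped command failed
    }
    # e.g. assert a killed pid is really gone:
    NOT_sketch kill -0 60194 2>/dev/null && echo "pid 60194 is gone, as expected"
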
00:07:31.177  
00:07:31.177  real	0m3.914s
00:07:31.177  user	0m4.125s
00:07:31.177  sys	0m0.683s
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:31.177   14:18:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:07:31.177  ************************************
00:07:31.177  END TEST default_locks
00:07:31.177  ************************************
00:07:31.177   14:18:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:07:31.177   14:18:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:31.177   14:18:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:31.177   14:18:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:31.177  ************************************
00:07:31.177  START TEST default_locks_via_rpc
00:07:31.177  ************************************
00:07:31.177   14:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:07:31.177   14:18:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60269
00:07:31.177   14:18:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60269
00:07:31.177   14:18:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:31.177   14:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60269 ']'
00:07:31.177   14:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:31.177   14:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:31.177   14:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:31.177  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:31.177   14:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:31.177   14:18:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:31.177  [2024-11-20 14:18:09.917135] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:31.177  [2024-11-20 14:18:09.917312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60269 ]
00:07:31.177  [2024-11-20 14:18:10.145171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.436  [2024-11-20 14:18:10.307686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60269
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60269
00:07:32.371   14:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
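
[annotation] default_locks_via_rpc, traced above, toggles the core locking at runtime instead of at launch: framework_disable_cpumask_locks drops the lock, framework_enable_cpumask_locks re-takes it, and lslocks confirms the spdk_cpu_lock entry is back. A sketch of that sequence (RPC method names, socket, and pid from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the core lock
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-acquire it
    lslocks -p 60269 | grep -q spdk_cpu_lock && echo "lock re-acquired"
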
00:07:33.003   14:18:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60269
00:07:33.003   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60269 ']'
00:07:33.003   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60269
00:07:33.003    14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:07:33.003   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:33.003    14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60269
00:07:33.003   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:33.003   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:33.003  killing process with pid 60269
00:07:33.003   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60269'
00:07:33.003   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60269
00:07:33.003   14:18:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60269
00:07:35.534  
00:07:35.534  real	0m4.138s
00:07:35.534  user	0m4.365s
00:07:35.534  sys	0m0.707s
00:07:35.534   14:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:35.534   14:18:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:35.534  ************************************
00:07:35.534  END TEST default_locks_via_rpc
00:07:35.534  ************************************
00:07:35.534   14:18:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:07:35.534   14:18:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:35.534   14:18:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:35.534   14:18:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:35.534  ************************************
00:07:35.534  START TEST non_locking_app_on_locked_coremask
00:07:35.534  ************************************
00:07:35.534   14:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:07:35.534   14:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60343
00:07:35.534   14:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:35.534   14:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60343 /var/tmp/spdk.sock
00:07:35.534   14:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60343 ']'
00:07:35.534   14:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:35.534   14:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:35.534  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:35.534   14:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:35.534   14:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:35.534   14:18:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:35.534  [2024-11-20 14:18:14.058619] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:35.535  [2024-11-20 14:18:14.058777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60343 ]
00:07:35.535  [2024-11-20 14:18:14.238914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:35.535  [2024-11-20 14:18:14.455276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60359
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60359 /var/tmp/spdk2.sock
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60359 ']'
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:36.469  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:36.469   14:18:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:36.469  [2024-11-20 14:18:15.442830] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:36.469  [2024-11-20 14:18:15.442979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60359 ]
00:07:36.727  [2024-11-20 14:18:15.638941] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:36.727  [2024-11-20 14:18:15.639019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.985  [2024-11-20 14:18:15.878860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
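
[annotation] This test runs two targets on the same core mask: the first spdk_tgt takes the core 0 lock as usual, and the second can only start because it passes --disable-cpumask-locks (hence the "CPU core locks deactivated" notice above) and talks on its own RPC socket. A sketch of the launch pair (both command lines appear verbatim in the trace):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$bin" -m 0x1 &                                                 # first instance locks core 0
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second coexists lock-free
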
00:07:39.515   14:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:39.515   14:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:39.515   14:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60343
00:07:39.515   14:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60343
00:07:39.515   14:18:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
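
locks_exist (cpu_locks.sh@22) is the assertion traced in the two commands above: a target that owns its core must show up in lslocks as holding a file whose name contains spdk_cpu_lock. A self-contained sketch of the same check, relying only on the naming convention visible in this trace:

    # Does the given pid hold any /var/tmp/spdk_cpu_lock_* lock?
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 60343 && echo "pid 60343 still owns its core lock"
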
00:07:40.082   14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60343
00:07:40.082   14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60343 ']'
00:07:40.082   14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60343
00:07:40.082    14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:40.082   14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:40.082    14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60343
00:07:40.082   14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:40.082   14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:40.082  killing process with pid 60343
00:07:40.082   14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60343'
00:07:40.082   14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60343
00:07:40.082   14:18:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60343
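
Every killprocess call in this suite follows the sequence traced above: check the pid argument, probe the process with kill -0, refuse to signal sudo itself on Linux (the comm name must be the reactor, not the wrapper), then kill and wait so sockets and lock files are released before the next step. A simplified reconstruction; the real helper in autotest_common.sh covers a few more corner cases:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                 # still running?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1         # never signal the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap it before continuing
    }
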
00:07:45.395   14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60359
00:07:45.395   14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60359 ']'
00:07:45.395   14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60359
00:07:45.395    14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:45.395   14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:45.395    14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60359
00:07:45.395   14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:45.395   14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:45.395  killing process with pid 60359
00:07:45.395   14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60359'
00:07:45.395   14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60359
00:07:45.395   14:18:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60359
00:07:46.772  
00:07:46.772  real	0m11.684s
00:07:46.772  user	0m12.523s
00:07:46.772  sys	0m1.164s
00:07:46.773   14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:46.773   14:18:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:46.773  ************************************
00:07:46.773  END TEST non_locking_app_on_locked_coremask
00:07:46.773  ************************************
00:07:46.773   14:18:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:07:46.773   14:18:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:46.773   14:18:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:46.773   14:18:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:46.773  ************************************
00:07:46.773  START TEST locking_app_on_unlocked_coremask
00:07:46.773  ************************************
00:07:46.773   14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:07:46.773   14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60507
00:07:46.773   14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60507 /var/tmp/spdk.sock
00:07:46.773   14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:07:46.773   14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60507 ']'
00:07:46.773   14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:46.773   14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:46.773  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:46.773   14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:46.773   14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:46.773   14:18:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:47.033  [2024-11-20 14:18:25.798082] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:47.033  [2024-11-20 14:18:25.798273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60507 ]
00:07:47.033  [2024-11-20 14:18:25.978302] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:47.033  [2024-11-20 14:18:25.978557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:47.333  [2024-11-20 14:18:26.083932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60523
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60523 /var/tmp/spdk2.sock
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60523 ']'
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:47.917  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:47.917   14:18:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:48.176  [2024-11-20 14:18:27.006258] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:48.176  [2024-11-20 14:18:27.006463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60523 ]
00:07:48.434  [2024-11-20 14:18:27.209414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.692  [2024-11-20 14:18:27.424742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.065   14:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:50.065   14:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:50.065   14:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60523
00:07:50.065   14:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60523
00:07:50.065   14:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:51.441   14:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60507
00:07:51.441   14:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60507 ']'
00:07:51.441   14:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60507
00:07:51.441    14:18:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:51.441   14:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:51.441    14:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60507
00:07:51.441   14:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:51.441   14:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:51.441  killing process with pid 60507
00:07:51.441   14:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60507'
00:07:51.441   14:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60507
00:07:51.441   14:18:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60507
00:07:56.708   14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60523
00:07:56.708   14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60523 ']'
00:07:56.708   14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60523
00:07:56.708    14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:56.708   14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:56.708    14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60523
00:07:56.708   14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:56.708   14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:56.708  killing process with pid 60523
00:07:56.708   14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60523'
00:07:56.708   14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60523
00:07:56.708   14:18:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60523
00:07:58.083  
00:07:58.083  real	0m11.287s
00:07:58.083  user	0m11.939s
00:07:58.083  sys	0m1.284s
00:07:58.083   14:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:58.083   14:18:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:58.083  ************************************
00:07:58.083  END TEST locking_app_on_unlocked_coremask
00:07:58.083  ************************************
00:07:58.083   14:18:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:58.083   14:18:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:58.083   14:18:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:58.083   14:18:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:58.083  ************************************
00:07:58.083  START TEST locking_app_on_locked_coremask
00:07:58.083  ************************************
00:07:58.083   14:18:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:07:58.083   14:18:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60669
00:07:58.083   14:18:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:07:58.083   14:18:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60669 /var/tmp/spdk.sock
00:07:58.083   14:18:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60669 ']'
00:07:58.084   14:18:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:58.084   14:18:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:58.084  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:58.084   14:18:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:58.084   14:18:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:58.084   14:18:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:58.342  [2024-11-20 14:18:37.153123] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:58.342  [2024-11-20 14:18:37.153325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60669 ]
00:07:58.600  [2024-11-20 14:18:37.340593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.600  [2024-11-20 14:18:37.452019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60691
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60691 /var/tmp/spdk2.sock
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60691 /var/tmp/spdk2.sock
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:59.535    14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60691 /var/tmp/spdk2.sock
00:07:59.535   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60691 ']'
00:07:59.536   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:59.536   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:59.536  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:59.536   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:59.536   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:59.536   14:18:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:59.536  [2024-11-20 14:18:38.388101] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:07:59.536  [2024-11-20 14:18:38.388299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60691 ]
00:07:59.794  [2024-11-20 14:18:38.607097] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60669 has claimed it.
00:07:59.794  [2024-11-20 14:18:38.607235] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:00.363  ERROR: process (pid: 60691) is no longer running
00:08:00.363  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60691) - No such process
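
This failure is the expected result for locking_app_on_locked_coremask: pid 60669 already holds /var/tmp/spdk_cpu_lock_000, so the second target, launched without --disable-cpumask-locks, aborts inside spdk_app_start and waitforlisten finds no process left to probe. The same collision can be reproduced by hand with two targets on one core mask (binary path as used in this run; hugepage setup assumed to be done already):

    ./build/bin/spdk_tgt -m 0x1 &                        # first claim on core 0 succeeds
    sleep 1
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # second claim fails: core locked
    echo "second target exit status: $?"                 # non-zero, matching the trace
    kill %1                                              # stop the first target again
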
00:08:00.363   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:00.363   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:08:00.363   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:08:00.363   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:00.363   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:00.363   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:00.363   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60669
00:08:00.363   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60669
00:08:00.363   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:00.624   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60669
00:08:00.624   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60669 ']'
00:08:00.624   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60669
00:08:00.624    14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:08:00.624   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:00.624    14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60669
00:08:00.624   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:00.624   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:00.624  killing process with pid 60669
00:08:00.624   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60669'
00:08:00.624   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60669
00:08:00.624   14:18:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60669
00:08:03.153  
00:08:03.153  real	0m4.633s
00:08:03.153  user	0m5.143s
00:08:03.153  sys	0m0.814s
00:08:03.153   14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:03.153   14:18:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:03.153  ************************************
00:08:03.153  END TEST locking_app_on_locked_coremask
00:08:03.153  ************************************
00:08:03.153   14:18:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:08:03.153   14:18:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:03.153   14:18:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:03.153   14:18:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:03.153  ************************************
00:08:03.153  START TEST locking_overlapped_coremask
00:08:03.153  ************************************
00:08:03.153   14:18:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:08:03.153   14:18:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60755
00:08:03.153   14:18:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60755 /var/tmp/spdk.sock
00:08:03.153   14:18:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:08:03.153   14:18:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60755 ']'
00:08:03.153   14:18:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:03.153   14:18:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:03.153  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:03.153   14:18:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:03.153   14:18:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:03.153   14:18:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:03.153  [2024-11-20 14:18:41.869918] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:03.153  [2024-11-20 14:18:41.870067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60755 ]
00:08:03.153  [2024-11-20 14:18:42.063275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:03.412  [2024-11-20 14:18:42.174259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:03.412  [2024-11-20 14:18:42.174363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:03.412  [2024-11-20 14:18:42.174366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60773
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60773 /var/tmp/spdk2.sock
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60773 /var/tmp/spdk2.sock
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:03.980    14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60773 /var/tmp/spdk2.sock
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60773 ']'
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:03.980  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:03.980   14:18:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:04.237  [2024-11-20 14:18:43.075040] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:04.238  [2024-11-20 14:18:43.075616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60773 ]
00:08:04.495  [2024-11-20 14:18:43.291800] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60755 has claimed it.
00:08:04.495  [2024-11-20 14:18:43.295643] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:08:04.754  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60773) - No such process
00:08:04.754  ERROR: process (pid: 60773) is no longer running
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
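
check_remaining_locks (cpu_locks.sh@36-38) is what those three xtrace lines expand to: glob whatever lock files are present, build the set a three-core (0x7) target is expected to hold, and require an exact match. The long backslash-escaped pattern in the trace is simply how xtrace renders the quoted right-hand side of ==. Condensed:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        # Quoting the RHS makes [[ == ]] compare literally instead of as a glob.
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }
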
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60755
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60755 ']'
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60755
00:08:04.754    14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:04.754    14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60755
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:04.754  killing process with pid 60755
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60755'
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60755
00:08:04.754   14:18:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60755
00:08:07.328  ************************************
00:08:07.328  END TEST locking_overlapped_coremask
00:08:07.328  ************************************
00:08:07.328  
00:08:07.328  real	0m4.405s
00:08:07.328  user	0m11.981s
00:08:07.328  sys	0m0.607s
00:08:07.328   14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:07.328   14:18:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:08:07.328   14:18:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:08:07.328   14:18:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:07.328   14:18:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:07.328   14:18:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:07.328  ************************************
00:08:07.328  START TEST locking_overlapped_coremask_via_rpc
00:08:07.329  ************************************
00:08:07.329   14:18:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:08:07.329   14:18:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60837
00:08:07.329   14:18:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:08:07.329   14:18:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60837 /var/tmp/spdk.sock
00:08:07.329   14:18:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60837 ']'
00:08:07.329   14:18:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:07.329   14:18:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:07.329  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:07.329   14:18:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:07.329   14:18:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:07.329   14:18:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:07.329  [2024-11-20 14:18:46.254173] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:07.329  [2024-11-20 14:18:46.254340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60837 ]
00:08:07.587  [2024-11-20 14:18:46.455229] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:07.587  [2024-11-20 14:18:46.455354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:07.845  [2024-11-20 14:18:46.579908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:07.845  [2024-11-20 14:18:46.579980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:07.845  [2024-11-20 14:18:46.579982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60866
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60866 /var/tmp/spdk2.sock
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60866 ']'
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:08.779  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:08.779   14:18:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:08.779  [2024-11-20 14:18:47.621343] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:08.779  [2024-11-20 14:18:47.621627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60866 ]
00:08:09.038  [2024-11-20 14:18:47.831460] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:08:09.038  [2024-11-20 14:18:47.831535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:09.295  [2024-11-20 14:18:48.061898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:09.295  [2024-11-20 14:18:48.065677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:09.295  [2024-11-20 14:18:48.065688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:11.826    14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:11.826  [2024-11-20 14:18:50.493978] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60837 has claimed it.
00:08:11.826  request:
00:08:11.826  {
00:08:11.826  "method": "framework_enable_cpumask_locks",
00:08:11.826  "req_id": 1
00:08:11.826  }
00:08:11.826  Got JSON-RPC error response
00:08:11.826  response:
00:08:11.826  {
00:08:11.826  "code": -32603,
00:08:11.826  "message": "Failed to claim CPU core: 2"
00:08:11.826  }
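
That error response is the point of locking_overlapped_coremask_via_rpc: both targets start with --disable-cpumask-locks, the first then claims its cores over RPC, and the same request against the second target fails with -32603 because core 2 of its 0x1c mask is already locked by pid 60837. Assuming the standard scripts/rpc.py client (not itself shown in this trace), the two calls look like:

    ./scripts/rpc.py framework_enable_cpumask_locks         # first target: locks claimed
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> JSON-RPC error -32603: "Failed to claim CPU core: 2"
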
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:11.826   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:11.827   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60837 /var/tmp/spdk.sock
00:08:11.827   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60837 ']'
00:08:11.827   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:11.827   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:11.827  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:11.827   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:11.827   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:11.827   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:12.085   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:12.085   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:12.085   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60866 /var/tmp/spdk2.sock
00:08:12.085   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60866 ']'
00:08:12.085   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:08:12.085   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:12.085  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:08:12.085   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:08:12.085   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:12.085   14:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:12.342   14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:12.342   14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:12.342   14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:08:12.342   14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:08:12.342   14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:08:12.342   14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:08:12.342  
00:08:12.342  real	0m5.098s
00:08:12.342  user	0m2.046s
00:08:12.342  sys	0m0.232s
00:08:12.342   14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:12.342   14:18:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:12.342  ************************************
00:08:12.342  END TEST locking_overlapped_coremask_via_rpc
00:08:12.342  ************************************
00:08:12.342   14:18:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:08:12.342   14:18:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60837 ]]
00:08:12.342   14:18:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60837
00:08:12.342   14:18:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60837 ']'
00:08:12.342   14:18:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60837
00:08:12.342    14:18:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:08:12.342   14:18:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:12.342    14:18:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60837
00:08:12.342   14:18:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:12.342   14:18:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:12.342  killing process with pid 60837
00:08:12.342   14:18:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60837'
00:08:12.342   14:18:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60837
00:08:12.342   14:18:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60837
00:08:14.868   14:18:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60866 ]]
00:08:14.868   14:18:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60866
00:08:14.868   14:18:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60866 ']'
00:08:14.868   14:18:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60866
00:08:14.868    14:18:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:08:14.868   14:18:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:14.868    14:18:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60866
00:08:14.868   14:18:53 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:08:14.868   14:18:53 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:08:14.868  killing process with pid 60866
00:08:14.868   14:18:53 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60866'
00:08:14.868   14:18:53 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60866
00:08:14.868   14:18:53 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60866
00:08:17.397   14:18:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:08:17.397   14:18:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:08:17.397   14:18:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60837 ]]
00:08:17.397   14:18:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60837
00:08:17.397   14:18:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60837 ']'
00:08:17.397   14:18:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60837
00:08:17.397  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60837) - No such process
00:08:17.397  Process with pid 60837 is not found
00:08:17.397   14:18:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60837 is not found'
00:08:17.397   14:18:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60866 ]]
00:08:17.397   14:18:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60866
00:08:17.397   14:18:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60866 ']'
00:08:17.397   14:18:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60866
00:08:17.397  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60866) - No such process
00:08:17.397  Process with pid 60866 is not found
00:08:17.397   14:18:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60866 is not found'
00:08:17.397   14:18:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
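
cleanup (cpu_locks.sh@15-18) closes the suite out exactly as traced above: kill whichever target pids are still set, tolerate targets that already exited (the "No such process" lines), then remove the per-core lock files so later suites start clean. Reconstructed from the trace, with the rm target assumed to be the lock-file glob:

    cleanup() {
        [[ -z $spdk_tgt_pid ]]  || killprocess "$spdk_tgt_pid"
        [[ -z $spdk_tgt_pid2 ]] || killprocess "$spdk_tgt_pid2"
        rm -f /var/tmp/spdk_cpu_lock_*
    }
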
00:08:17.397  
00:08:17.397  real	0m50.347s
00:08:17.397  user	1m30.091s
00:08:17.397  sys	0m6.469s
00:08:17.397   14:18:55 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:17.397   14:18:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:17.397  ************************************
00:08:17.397  END TEST cpu_locks
00:08:17.397  ************************************
00:08:17.397  
00:08:17.397  real	1m23.785s
00:08:17.397  user	2m38.436s
00:08:17.397  sys	0m10.396s
00:08:17.397   14:18:56 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:17.397   14:18:56 event -- common/autotest_common.sh@10 -- # set +x
00:08:17.397  ************************************
00:08:17.397  END TEST event
00:08:17.397  ************************************
00:08:17.397   14:18:56  -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:08:17.397   14:18:56  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:17.397   14:18:56  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:17.397   14:18:56  -- common/autotest_common.sh@10 -- # set +x
00:08:17.397  ************************************
00:08:17.397  START TEST thread
00:08:17.397  ************************************
00:08:17.397   14:18:56 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:08:17.397  * Looking for test storage...
00:08:17.397  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:08:17.397    14:18:56 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:17.397     14:18:56 thread -- common/autotest_common.sh@1693 -- # lcov --version
00:08:17.397     14:18:56 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:17.397    14:18:56 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:17.397    14:18:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:17.397    14:18:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:17.397    14:18:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:17.397    14:18:56 thread -- scripts/common.sh@336 -- # IFS=.-:
00:08:17.397    14:18:56 thread -- scripts/common.sh@336 -- # read -ra ver1
00:08:17.397    14:18:56 thread -- scripts/common.sh@337 -- # IFS=.-:
00:08:17.397    14:18:56 thread -- scripts/common.sh@337 -- # read -ra ver2
00:08:17.397    14:18:56 thread -- scripts/common.sh@338 -- # local 'op=<'
00:08:17.397    14:18:56 thread -- scripts/common.sh@340 -- # ver1_l=2
00:08:17.397    14:18:56 thread -- scripts/common.sh@341 -- # ver2_l=1
00:08:17.397    14:18:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:17.397    14:18:56 thread -- scripts/common.sh@344 -- # case "$op" in
00:08:17.397    14:18:56 thread -- scripts/common.sh@345 -- # : 1
00:08:17.397    14:18:56 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:17.397    14:18:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:17.397     14:18:56 thread -- scripts/common.sh@365 -- # decimal 1
00:08:17.397     14:18:56 thread -- scripts/common.sh@353 -- # local d=1
00:08:17.397     14:18:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:17.397     14:18:56 thread -- scripts/common.sh@355 -- # echo 1
00:08:17.397    14:18:56 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:08:17.397     14:18:56 thread -- scripts/common.sh@366 -- # decimal 2
00:08:17.397     14:18:56 thread -- scripts/common.sh@353 -- # local d=2
00:08:17.397     14:18:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:17.397     14:18:56 thread -- scripts/common.sh@355 -- # echo 2
00:08:17.397    14:18:56 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:08:17.397    14:18:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:17.397    14:18:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:17.397    14:18:56 thread -- scripts/common.sh@368 -- # return 0
00:08:17.397    14:18:56 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:17.397    14:18:56 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:17.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:17.397  		--rc genhtml_branch_coverage=1
00:08:17.397  		--rc genhtml_function_coverage=1
00:08:17.397  		--rc genhtml_legend=1
00:08:17.397  		--rc geninfo_all_blocks=1
00:08:17.397  		--rc geninfo_unexecuted_blocks=1
00:08:17.397  		
00:08:17.397  		'
00:08:17.397    14:18:56 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:17.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:17.397  		--rc genhtml_branch_coverage=1
00:08:17.397  		--rc genhtml_function_coverage=1
00:08:17.397  		--rc genhtml_legend=1
00:08:17.397  		--rc geninfo_all_blocks=1
00:08:17.397  		--rc geninfo_unexecuted_blocks=1
00:08:17.397  		
00:08:17.397  		'
00:08:17.397    14:18:56 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:17.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:17.397  		--rc genhtml_branch_coverage=1
00:08:17.397  		--rc genhtml_function_coverage=1
00:08:17.397  		--rc genhtml_legend=1
00:08:17.397  		--rc geninfo_all_blocks=1
00:08:17.397  		--rc geninfo_unexecuted_blocks=1
00:08:17.397  		
00:08:17.397  		'
00:08:17.397    14:18:56 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:17.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:17.397  		--rc genhtml_branch_coverage=1
00:08:17.397  		--rc genhtml_function_coverage=1
00:08:17.397  		--rc genhtml_legend=1
00:08:17.397  		--rc geninfo_all_blocks=1
00:08:17.397  		--rc geninfo_unexecuted_blocks=1
00:08:17.397  		
00:08:17.397  		'
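
The long expansion above is scripts/common.sh deciding whether the available lcov predates 2.x: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, and because it returns 0 here (1.15 < 2) the legacy --rc lcov_* option spellings are exported. A condensed sketch of that comparison, assuming purely numeric fields:

    lt() {
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=${#v1[@]}
        (( ${#v2[@]} > n )) && n=${#v2[@]}
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller field
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal is not less-than
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x"
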
00:08:17.397   14:18:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:17.397   14:18:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:08:17.398   14:18:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:17.398   14:18:56 thread -- common/autotest_common.sh@10 -- # set +x
00:08:17.398  ************************************
00:08:17.398  START TEST thread_poller_perf
00:08:17.398  ************************************
00:08:17.398   14:18:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:08:17.398  [2024-11-20 14:18:56.297161] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:17.398  [2024-11-20 14:18:56.297389] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61061 ]
00:08:17.656  [2024-11-20 14:18:56.493080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:17.656  [2024-11-20 14:18:56.600604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.656  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:08:19.029  [2024-11-20T14:18:58.011Z]  ======================================
00:08:19.029  [2024-11-20T14:18:58.011Z]  busy:2210344837 (cyc)
00:08:19.029  [2024-11-20T14:18:58.011Z]  total_run_count: 286000
00:08:19.029  [2024-11-20T14:18:58.011Z]  tsc_hz: 2200000000 (cyc)
00:08:19.029  [2024-11-20T14:18:58.011Z]  ======================================
00:08:19.029  [2024-11-20T14:18:58.011Z]  poller_cost: 7728 (cyc), 3512 (nsec)
00:08:19.029  
00:08:19.029  real	0m1.595s
00:08:19.029  user	0m1.370s
00:08:19.029  sys	0m0.114s
00:08:19.029   14:18:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:19.029   14:18:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:19.029  ************************************
00:08:19.029  END TEST thread_poller_perf
00:08:19.029  ************************************
00:08:19.029   14:18:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:19.029   14:18:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:08:19.029   14:18:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:19.029   14:18:57 thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.029  ************************************
00:08:19.029  START TEST thread_poller_perf
00:08:19.029  ************************************
00:08:19.029   14:18:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:19.029  [2024-11-20 14:18:57.932278] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:19.029  [2024-11-20 14:18:57.932451] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61098 ]
00:08:19.287  [2024-11-20 14:18:58.108651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:19.287  [2024-11-20 14:18:58.211901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:19.287  Running 1000 pollers for 1 second with a 0 microsecond period.
00:08:20.661  ======================================
00:08:20.661  busy:2203869081 (cyc)
00:08:20.661  total_run_count: 3288000
00:08:20.661  tsc_hz: 2200000000 (cyc)
00:08:20.661  ======================================
00:08:20.661  poller_cost: 670 (cyc), 304 (nsec)
00:08:20.661  
00:08:20.661  real	0m1.555s
00:08:20.661  user	0m1.346s
00:08:20.661  sys	0m0.099s
00:08:20.661   14:18:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:20.661   14:18:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:20.661  ************************************
00:08:20.661  END TEST thread_poller_perf
00:08:20.661  ************************************
00:08:20.662   14:18:59 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:08:20.662  
00:08:20.662  real	0m3.412s
00:08:20.662  user	0m2.857s
00:08:20.662  sys	0m0.334s
00:08:20.662   14:18:59 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:20.662   14:18:59 thread -- common/autotest_common.sh@10 -- # set +x
00:08:20.662  ************************************
00:08:20.662  END TEST thread
00:08:20.662  ************************************
00:08:20.662   14:18:59  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:08:20.662   14:18:59  -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:08:20.662   14:18:59  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:20.662   14:18:59  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:20.662   14:18:59  -- common/autotest_common.sh@10 -- # set +x
00:08:20.662  ************************************
00:08:20.662  START TEST app_cmdline
00:08:20.662  ************************************
00:08:20.662   14:18:59 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:08:20.662  * Looking for test storage...
00:08:20.662  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:08:20.662    14:18:59 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:20.662     14:18:59 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version
00:08:20.662     14:18:59 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:20.921    14:18:59 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@345 -- # : 1
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:20.921     14:18:59 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:08:20.921     14:18:59 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:08:20.921     14:18:59 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:20.921     14:18:59 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:08:20.921     14:18:59 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:08:20.921     14:18:59 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:08:20.921     14:18:59 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:20.921     14:18:59 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:20.921    14:18:59 app_cmdline -- scripts/common.sh@368 -- # return 0
00:08:20.921    14:18:59 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:20.921    14:18:59 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:20.921  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.921  		--rc genhtml_branch_coverage=1
00:08:20.921  		--rc genhtml_function_coverage=1
00:08:20.921  		--rc genhtml_legend=1
00:08:20.921  		--rc geninfo_all_blocks=1
00:08:20.921  		--rc geninfo_unexecuted_blocks=1
00:08:20.921  		
00:08:20.921  		'
00:08:20.921    14:18:59 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:20.921  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.921  		--rc genhtml_branch_coverage=1
00:08:20.921  		--rc genhtml_function_coverage=1
00:08:20.921  		--rc genhtml_legend=1
00:08:20.921  		--rc geninfo_all_blocks=1
00:08:20.921  		--rc geninfo_unexecuted_blocks=1
00:08:20.921  		
00:08:20.921  		'
00:08:20.922    14:18:59 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:20.922  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.922  		--rc genhtml_branch_coverage=1
00:08:20.922  		--rc genhtml_function_coverage=1
00:08:20.922  		--rc genhtml_legend=1
00:08:20.922  		--rc geninfo_all_blocks=1
00:08:20.922  		--rc geninfo_unexecuted_blocks=1
00:08:20.922  		
00:08:20.922  		'
00:08:20.922    14:18:59 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:20.922  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.922  		--rc genhtml_branch_coverage=1
00:08:20.922  		--rc genhtml_function_coverage=1
00:08:20.922  		--rc genhtml_legend=1
00:08:20.922  		--rc geninfo_all_blocks=1
00:08:20.922  		--rc geninfo_unexecuted_blocks=1
00:08:20.922  		
00:08:20.922  		'
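The block above repeats at the start of every test: autotest_common.sh asks lcov for its version, checks via cmp_versions that 1.15 is less than the detected version (reported here as 2), and on success exports the modern --rc option set as LCOV_OPTS and LCOV. The comparison, condensed from the trace (a sketch; scripts/common.sh is the real implementation):

    # split both versions on [.-:], then compare numerically field by field
    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    (( ver1[0] < ver2[0] )) && echo "lt 1.15 2 -> true: use the --rc options"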
00:08:20.922   14:18:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:08:20.922   14:18:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61187
00:08:20.922   14:18:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61187
00:08:20.922   14:18:59 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:08:20.922   14:18:59 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61187 ']'
00:08:20.922   14:18:59 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:20.922   14:18:59 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:20.922  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:20.922   14:18:59 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:20.922   14:18:59 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:20.922   14:18:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:20.922  [2024-11-20 14:18:59.845866] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:20.922  [2024-11-20 14:18:59.846095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61187 ]
00:08:21.181  [2024-11-20 14:19:00.041628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:21.439  [2024-11-20 14:19:00.223520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:22.374   14:19:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:22.374   14:19:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:08:22.374   14:19:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:08:22.632  {
00:08:22.632    "version": "SPDK v25.01-pre git sha1 5c8d99223",
00:08:22.632    "fields": {
00:08:22.632      "major": 25,
00:08:22.632      "minor": 1,
00:08:22.632      "patch": 0,
00:08:22.632      "suffix": "-pre",
00:08:22.632      "commit": "5c8d99223"
00:08:22.632    }
00:08:22.632  }
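The version object above came over the target's JSON-RPC socket; the two methods the allow-list permits can be replayed by hand with the same helper the test drives:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort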
00:08:22.632   14:19:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:08:22.632   14:19:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:08:22.632   14:19:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:08:22.632   14:19:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:08:22.632    14:19:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:08:22.632    14:19:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:08:22.632    14:19:01 app_cmdline -- app/cmdline.sh@26 -- # sort
00:08:22.632    14:19:01 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:22.632    14:19:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:22.632    14:19:01 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:22.632   14:19:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:08:22.632   14:19:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:08:22.632   14:19:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:22.632   14:19:01 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:08:22.632   14:19:01 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:22.632   14:19:01 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:22.632   14:19:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:22.632    14:19:01 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:22.632   14:19:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:22.632    14:19:01 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:22.632   14:19:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:22.632   14:19:01 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:08:22.632   14:19:01 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:08:22.632   14:19:01 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:22.889  request:
00:08:22.889  {
00:08:22.889    "method": "env_dpdk_get_mem_stats",
00:08:22.889    "req_id": 1
00:08:22.889  }
00:08:22.889  Got JSON-RPC error response
00:08:22.889  response:
00:08:22.889  {
00:08:22.889    "code": -32601,
00:08:22.889    "message": "Method not found"
00:08:22.889  }
00:08:22.889   14:19:01 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:08:22.889   14:19:01 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:22.890   14:19:01 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:22.890   14:19:01 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
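The -32601 above is the expected outcome, not a failure: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method is rejected, and the NOT wrapper turns the non-zero exit status into a pass. A minimal reproduction with the paths from this run:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # expect -32601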
00:08:22.890   14:19:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61187
00:08:22.890   14:19:01 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61187 ']'
00:08:22.890   14:19:01 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61187
00:08:22.890    14:19:01 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:08:22.890   14:19:01 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:22.890    14:19:01 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61187
00:08:22.890  killing process with pid 61187
00:08:22.890   14:19:01 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:22.890   14:19:01 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:22.890   14:19:01 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61187'
00:08:22.890   14:19:01 app_cmdline -- common/autotest_common.sh@973 -- # kill 61187
00:08:22.890   14:19:01 app_cmdline -- common/autotest_common.sh@978 -- # wait 61187
00:08:25.417  
00:08:25.417  real	0m4.581s
00:08:25.417  user	0m5.264s
00:08:25.417  sys	0m0.595s
00:08:25.417   14:19:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:25.417   14:19:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:08:25.417  ************************************
00:08:25.417  END TEST app_cmdline
00:08:25.417  ************************************
00:08:25.417   14:19:04  -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:08:25.417   14:19:04  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:25.417   14:19:04  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:25.417   14:19:04  -- common/autotest_common.sh@10 -- # set +x
00:08:25.417  ************************************
00:08:25.417  START TEST version
00:08:25.417  ************************************
00:08:25.417   14:19:04 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:08:25.417  * Looking for test storage...
00:08:25.417  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:08:25.417    14:19:04 version -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:25.417     14:19:04 version -- common/autotest_common.sh@1693 -- # lcov --version
00:08:25.417     14:19:04 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:25.417    14:19:04 version -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:25.417    14:19:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:25.417    14:19:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:25.417    14:19:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:25.417    14:19:04 version -- scripts/common.sh@336 -- # IFS=.-:
00:08:25.417    14:19:04 version -- scripts/common.sh@336 -- # read -ra ver1
00:08:25.417    14:19:04 version -- scripts/common.sh@337 -- # IFS=.-:
00:08:25.417    14:19:04 version -- scripts/common.sh@337 -- # read -ra ver2
00:08:25.417    14:19:04 version -- scripts/common.sh@338 -- # local 'op=<'
00:08:25.417    14:19:04 version -- scripts/common.sh@340 -- # ver1_l=2
00:08:25.417    14:19:04 version -- scripts/common.sh@341 -- # ver2_l=1
00:08:25.417    14:19:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:25.417    14:19:04 version -- scripts/common.sh@344 -- # case "$op" in
00:08:25.417    14:19:04 version -- scripts/common.sh@345 -- # : 1
00:08:25.417    14:19:04 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:25.417    14:19:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:25.417     14:19:04 version -- scripts/common.sh@365 -- # decimal 1
00:08:25.417     14:19:04 version -- scripts/common.sh@353 -- # local d=1
00:08:25.417     14:19:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:25.417     14:19:04 version -- scripts/common.sh@355 -- # echo 1
00:08:25.417    14:19:04 version -- scripts/common.sh@365 -- # ver1[v]=1
00:08:25.417     14:19:04 version -- scripts/common.sh@366 -- # decimal 2
00:08:25.417     14:19:04 version -- scripts/common.sh@353 -- # local d=2
00:08:25.417     14:19:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:25.417     14:19:04 version -- scripts/common.sh@355 -- # echo 2
00:08:25.417    14:19:04 version -- scripts/common.sh@366 -- # ver2[v]=2
00:08:25.417    14:19:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:25.417    14:19:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:25.417    14:19:04 version -- scripts/common.sh@368 -- # return 0
00:08:25.417    14:19:04 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:25.417    14:19:04 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:25.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:25.417  		--rc genhtml_branch_coverage=1
00:08:25.417  		--rc genhtml_function_coverage=1
00:08:25.417  		--rc genhtml_legend=1
00:08:25.417  		--rc geninfo_all_blocks=1
00:08:25.417  		--rc geninfo_unexecuted_blocks=1
00:08:25.417  		
00:08:25.417  		'
00:08:25.417    14:19:04 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:25.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:25.417  		--rc genhtml_branch_coverage=1
00:08:25.417  		--rc genhtml_function_coverage=1
00:08:25.417  		--rc genhtml_legend=1
00:08:25.417  		--rc geninfo_all_blocks=1
00:08:25.417  		--rc geninfo_unexecuted_blocks=1
00:08:25.417  		
00:08:25.417  		'
00:08:25.417    14:19:04 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:25.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:25.417  		--rc genhtml_branch_coverage=1
00:08:25.417  		--rc genhtml_function_coverage=1
00:08:25.417  		--rc genhtml_legend=1
00:08:25.417  		--rc geninfo_all_blocks=1
00:08:25.417  		--rc geninfo_unexecuted_blocks=1
00:08:25.417  		
00:08:25.417  		'
00:08:25.417    14:19:04 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:25.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:25.417  		--rc genhtml_branch_coverage=1
00:08:25.417  		--rc genhtml_function_coverage=1
00:08:25.417  		--rc genhtml_legend=1
00:08:25.417  		--rc geninfo_all_blocks=1
00:08:25.417  		--rc geninfo_unexecuted_blocks=1
00:08:25.417  		
00:08:25.417  		'
00:08:25.417    14:19:04 version -- app/version.sh@17 -- # get_header_version major
00:08:25.417    14:19:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:25.417    14:19:04 version -- app/version.sh@14 -- # cut -f2
00:08:25.417    14:19:04 version -- app/version.sh@14 -- # tr -d '"'
00:08:25.417   14:19:04 version -- app/version.sh@17 -- # major=25
00:08:25.417    14:19:04 version -- app/version.sh@18 -- # get_header_version minor
00:08:25.417    14:19:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:25.417    14:19:04 version -- app/version.sh@14 -- # cut -f2
00:08:25.417    14:19:04 version -- app/version.sh@14 -- # tr -d '"'
00:08:25.417   14:19:04 version -- app/version.sh@18 -- # minor=1
00:08:25.417    14:19:04 version -- app/version.sh@19 -- # get_header_version patch
00:08:25.417    14:19:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:25.417    14:19:04 version -- app/version.sh@14 -- # cut -f2
00:08:25.417    14:19:04 version -- app/version.sh@14 -- # tr -d '"'
00:08:25.417   14:19:04 version -- app/version.sh@19 -- # patch=0
00:08:25.417    14:19:04 version -- app/version.sh@20 -- # get_header_version suffix
00:08:25.417    14:19:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:08:25.417    14:19:04 version -- app/version.sh@14 -- # cut -f2
00:08:25.417    14:19:04 version -- app/version.sh@14 -- # tr -d '"'
00:08:25.417   14:19:04 version -- app/version.sh@20 -- # suffix=-pre
00:08:25.417   14:19:04 version -- app/version.sh@22 -- # version=25.1
00:08:25.417   14:19:04 version -- app/version.sh@25 -- # (( patch != 0 ))
00:08:25.417   14:19:04 version -- app/version.sh@28 -- # version=25.1rc0
00:08:25.417   14:19:04 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:08:25.417    14:19:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:08:25.417   14:19:04 version -- app/version.sh@30 -- # py_version=25.1rc0
00:08:25.417   14:19:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:08:25.417  
00:08:25.417  real	0m0.225s
00:08:25.417  user	0m0.151s
00:08:25.417  sys	0m0.105s
00:08:25.417   14:19:04 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:25.417   14:19:04 version -- common/autotest_common.sh@10 -- # set +x
00:08:25.417  ************************************
00:08:25.417  END TEST version
00:08:25.417  ************************************
00:08:25.417   14:19:04  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:08:25.417   14:19:04  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:08:25.417    14:19:04  -- spdk/autotest.sh@194 -- # uname -s
00:08:25.676   14:19:04  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:08:25.676   14:19:04  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:08:25.676   14:19:04  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:08:25.676   14:19:04  -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
00:08:25.676   14:19:04  -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:08:25.676   14:19:04  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:25.676   14:19:04  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:25.676   14:19:04  -- common/autotest_common.sh@10 -- # set +x
00:08:25.676  ************************************
00:08:25.676  START TEST blockdev_nvme
00:08:25.676  ************************************
00:08:25.676   14:19:04 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:08:25.676  * Looking for test storage...
00:08:25.676  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:08:25.676    14:19:04 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:25.676     14:19:04 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version
00:08:25.676     14:19:04 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:25.676    14:19:04 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-:
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-:
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<'
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@345 -- # : 1
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:25.676     14:19:04 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1
00:08:25.676     14:19:04 blockdev_nvme -- scripts/common.sh@353 -- # local d=1
00:08:25.676     14:19:04 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:25.676     14:19:04 blockdev_nvme -- scripts/common.sh@355 -- # echo 1
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:08:25.676     14:19:04 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2
00:08:25.676     14:19:04 blockdev_nvme -- scripts/common.sh@353 -- # local d=2
00:08:25.676     14:19:04 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:25.676     14:19:04 blockdev_nvme -- scripts/common.sh@355 -- # echo 2
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:25.676    14:19:04 blockdev_nvme -- scripts/common.sh@368 -- # return 0
00:08:25.676    14:19:04 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:25.676    14:19:04 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:25.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:25.676  		--rc genhtml_branch_coverage=1
00:08:25.676  		--rc genhtml_function_coverage=1
00:08:25.676  		--rc genhtml_legend=1
00:08:25.676  		--rc geninfo_all_blocks=1
00:08:25.676  		--rc geninfo_unexecuted_blocks=1
00:08:25.676  		
00:08:25.676  		'
00:08:25.676    14:19:04 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:25.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:25.676  		--rc genhtml_branch_coverage=1
00:08:25.676  		--rc genhtml_function_coverage=1
00:08:25.676  		--rc genhtml_legend=1
00:08:25.676  		--rc geninfo_all_blocks=1
00:08:25.676  		--rc geninfo_unexecuted_blocks=1
00:08:25.676  		
00:08:25.676  		'
00:08:25.676    14:19:04 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:25.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:25.676  		--rc genhtml_branch_coverage=1
00:08:25.677  		--rc genhtml_function_coverage=1
00:08:25.677  		--rc genhtml_legend=1
00:08:25.677  		--rc geninfo_all_blocks=1
00:08:25.677  		--rc geninfo_unexecuted_blocks=1
00:08:25.677  		
00:08:25.677  		'
00:08:25.677    14:19:04 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:25.677  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:25.677  		--rc genhtml_branch_coverage=1
00:08:25.677  		--rc genhtml_function_coverage=1
00:08:25.677  		--rc genhtml_legend=1
00:08:25.677  		--rc geninfo_all_blocks=1
00:08:25.677  		--rc geninfo_unexecuted_blocks=1
00:08:25.677  		
00:08:25.677  		'
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:08:25.677    14:19:04 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@20 -- # :
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:08:25.677    14:19:04 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device=
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek=
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx=
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]]
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]]
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61380
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:08:25.677  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:25.677   14:19:04 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61380
00:08:25.677   14:19:04 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61380 ']'
00:08:25.677   14:19:04 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:25.677   14:19:04 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:25.677   14:19:04 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:25.677   14:19:04 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:25.677   14:19:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:25.934  [2024-11-20 14:19:04.677491] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:25.935  [2024-11-20 14:19:04.677902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61380 ]
00:08:25.935  [2024-11-20 14:19:04.865534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.248  [2024-11-20 14:19:04.969802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:26.815   14:19:05 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:26.815   14:19:05 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0
00:08:26.815   14:19:05 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:08:26.815   14:19:05 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf
00:08:26.815   14:19:05 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json
00:08:26.815   14:19:05 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json
00:08:26.815    14:19:05 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:27.073   14:19:05 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\'''
00:08:27.073   14:19:05 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.073   14:19:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:27.332   14:19:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.332   14:19:06 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:08:27.332   14:19:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.332   14:19:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:27.332   14:19:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.332   14:19:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat
00:08:27.332    14:19:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:08:27.332    14:19:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.332    14:19:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:27.332    14:19:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.332    14:19:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:08:27.332    14:19:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.332    14:19:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:27.332    14:19:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.332    14:19:06 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:08:27.332    14:19:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.332    14:19:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:27.333    14:19:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.333   14:19:06 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:08:27.333    14:19:06 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:08:27.333    14:19:06 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:08:27.333    14:19:06 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.333    14:19:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:27.333    14:19:06 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.333   14:19:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:08:27.333    14:19:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name
00:08:27.333    14:19:06 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "c1eb1cec-38da-4664-8572-35ba2f8472d7"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "c1eb1cec-38da-4664-8572-35ba2f8472d7",' '  "numa_id": -1,' '  "md_size": 64,' '  "md_interleave": false,' '  "dif_type": 0,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": true,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme1n1",' '  "aliases": [' '    "2893d855-933e-460e-9e9a-3df0560bdf7f"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "2893d855-933e-460e-9e9a-3df0560bdf7f",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:11.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:11.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12341",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12341",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            
"firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n1",' '  "aliases": [' '    "ba72257b-6eaf-49c2-9543-6ab8d59aa8a5"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "ba72257b-6eaf-49c2-9543-6ab8d59aa8a5",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n2",' '  "aliases": [' '    "9d4162b0-59df-42c6-b2e6-deb988e85a0e"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "9d4162b0-59df-42c6-b2e6-deb988e85a0e",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          
"serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 2,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n3",' '  "aliases": [' '    "66868fd8-2409-4aca-87bb-f968ed4dc562"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "66868fd8-2409-4aca-87bb-f968ed4dc562",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 3,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme3n1",' '  "aliases": [' '    "61be8fa1-0df3-43ef-96eb-2d249b9e7e43"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "61be8fa1-0df3-43ef-96eb-2d249b9e7e43",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:13.0",' '        "trid": {' '          
"trtype": "PCIe",' '          "traddr": "0000:00:13.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12343",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": true,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": true' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:08:27.333   14:19:06 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:08:27.333   14:19:06 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1
00:08:27.333   14:19:06 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:08:27.333   14:19:06 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61380
00:08:27.333   14:19:06 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61380 ']'
00:08:27.333   14:19:06 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61380
00:08:27.333    14:19:06 blockdev_nvme -- common/autotest_common.sh@959 -- # uname
00:08:27.333   14:19:06 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:27.333    14:19:06 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61380
00:08:27.592   14:19:06 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:27.592   14:19:06 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:27.592  killing process with pid 61380
00:08:27.592   14:19:06 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61380'
00:08:27.592   14:19:06 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61380
00:08:27.592   14:19:06 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61380
00:08:30.123   14:19:08 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:30.123   14:19:08 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:08:30.123   14:19:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:08:30.123   14:19:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:30.123   14:19:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:30.123  ************************************
00:08:30.123  START TEST bdev_hello_world
00:08:30.123  ************************************
00:08:30.123   14:19:08 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:08:30.123  [2024-11-20 14:19:08.596549] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:30.123  [2024-11-20 14:19:08.596788] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61465 ]
00:08:30.123  [2024-11-20 14:19:08.784790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.123  [2024-11-20 14:19:08.909164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.690  [2024-11-20 14:19:09.573780] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:08:30.690  [2024-11-20 14:19:09.573834] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:08:30.690  [2024-11-20 14:19:09.573864] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:08:30.690  [2024-11-20 14:19:09.576950] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:08:30.690  [2024-11-20 14:19:09.577369] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:08:30.690  [2024-11-20 14:19:09.577395] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:08:30.690  [2024-11-20 14:19:09.577656] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:08:30.690  
00:08:30.690  [2024-11-20 14:19:09.577693] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:08:31.693  
00:08:31.693  real	0m2.101s
00:08:31.693  user	0m1.776s
00:08:31.693  sys	0m0.211s
00:08:31.693   14:19:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:31.693   14:19:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:31.693  ************************************
00:08:31.693  END TEST bdev_hello_world
00:08:31.693  ************************************
00:08:31.693   14:19:10 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:08:31.693   14:19:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:31.693   14:19:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:31.693   14:19:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:31.693  ************************************
00:08:31.693  START TEST bdev_bounds
00:08:31.693  ************************************
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61511
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:08:31.693  Process bdevio pid: 61511
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61511'
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61511
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61511 ']'
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:31.693  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:31.693   14:19:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:08:31.988  [2024-11-20 14:19:10.759672] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:31.988  [2024-11-20 14:19:10.759829] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61511 ]
00:08:31.988  [2024-11-20 14:19:10.938405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:32.247  [2024-11-20 14:19:11.069542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:32.247  [2024-11-20 14:19:11.069667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:32.247  [2024-11-20 14:19:11.069677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:33.182   14:19:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:33.182   14:19:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:08:33.182   14:19:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:08:33.182  I/O targets:
00:08:33.182    Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:08:33.182    Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:08:33.182    Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:33.182    Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:33.182    Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:08:33.182    Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
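The MiB column in this target list is just blocks times the 4096-byte block size; for the first target, as an arithmetic spot check:

    awk 'BEGIN { printf "Nvme0n1: %.0f MiB\n", 1548666 * 4096 / (1024 * 1024) }'   # -> 6050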
00:08:33.182  
00:08:33.182  
00:08:33.182       CUnit - A unit testing framework for C - Version 2.1-3
00:08:33.182       http://cunit.sourceforge.net/
00:08:33.182  
00:08:33.182  
00:08:33.182  Suite: bdevio tests on: Nvme3n1
00:08:33.182    Test: blockdev write read block ...passed
00:08:33.182    Test: blockdev write zeroes read block ...passed
00:08:33.182    Test: blockdev write zeroes read no split ...passed
00:08:33.182    Test: blockdev write zeroes read split ...passed
00:08:33.182    Test: blockdev write zeroes read split partial ...passed
00:08:33.182    Test: blockdev reset ...[2024-11-20 14:19:12.025231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:08:33.182  [2024-11-20 14:19:12.029392] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:08:33.182  passed
00:08:33.182    Test: blockdev write read 8 blocks ...passed
00:08:33.182    Test: blockdev write read size > 128k ...passed
00:08:33.182    Test: blockdev write read invalid size ...passed
00:08:33.182    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:33.182    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:33.182    Test: blockdev write read max offset ...passed
00:08:33.182    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:33.182    Test: blockdev writev readv 8 blocks ...passed
00:08:33.182    Test: blockdev writev readv 30 x 1block ...passed
00:08:33.182    Test: blockdev writev readv block ...passed
00:08:33.182    Test: blockdev writev readv size > 128k ...passed
00:08:33.182    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:33.182    Test: blockdev comparev and writev ...[2024-11-20 14:19:12.038085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb00a000 len:0x1000
00:08:33.183  [2024-11-20 14:19:12.038161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:33.183  passed
00:08:33.183    Test: blockdev nvme passthru rw ...passed
00:08:33.183    Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:19:12.039143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:33.183  [2024-11-20 14:19:12.039200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:33.183  passed
00:08:33.183    Test: blockdev nvme admin passthru ...passed
00:08:33.183    Test: blockdev copy ...passed
00:08:33.183  Suite: bdevio tests on: Nvme2n3
00:08:33.183    Test: blockdev write read block ...passed
00:08:33.183    Test: blockdev write zeroes read block ...passed
00:08:33.183    Test: blockdev write zeroes read no split ...passed
00:08:33.183    Test: blockdev write zeroes read split ...passed
00:08:33.183    Test: blockdev write zeroes read split partial ...passed
00:08:33.183    Test: blockdev reset ...[2024-11-20 14:19:12.121350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:08:33.183  [2024-11-20 14:19:12.125967] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:33.183  passed
00:08:33.183    Test: blockdev write read 8 blocks ...passed
00:08:33.183    Test: blockdev write read size > 128k ...passed
00:08:33.183    Test: blockdev write read invalid size ...passed
00:08:33.183    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:33.183    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:33.183    Test: blockdev write read max offset ...passed
00:08:33.183    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:33.183    Test: blockdev writev readv 8 blocks ...passed
00:08:33.183    Test: blockdev writev readv 30 x 1block ...passed
00:08:33.183    Test: blockdev writev readv block ...passed
00:08:33.183    Test: blockdev writev readv size > 128k ...passed
00:08:33.183    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:33.183    Test: blockdev comparev and writev ...[2024-11-20 14:19:12.134929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29e206000 len:0x1000
00:08:33.183  [2024-11-20 14:19:12.135033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:33.183  passed
00:08:33.183    Test: blockdev nvme passthru rw ...passed
00:08:33.183    Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:19:12.136041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:33.183  [2024-11-20 14:19:12.136101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:33.183  passed
00:08:33.183    Test: blockdev nvme admin passthru ...passed
00:08:33.183    Test: blockdev copy ...passed
00:08:33.183  Suite: bdevio tests on: Nvme2n2
00:08:33.183    Test: blockdev write read block ...passed
00:08:33.183    Test: blockdev write zeroes read block ...passed
00:08:33.183    Test: blockdev write zeroes read no split ...passed
00:08:33.443    Test: blockdev write zeroes read split ...passed
00:08:33.443    Test: blockdev write zeroes read split partial ...passed
00:08:33.443    Test: blockdev reset ...[2024-11-20 14:19:12.213589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:08:33.443  [2024-11-20 14:19:12.218019] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:33.443  passed
00:08:33.443    Test: blockdev write read 8 blocks ...passed
00:08:33.443    Test: blockdev write read size > 128k ...passed
00:08:33.443    Test: blockdev write read invalid size ...passed
00:08:33.443    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:33.443    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:33.443    Test: blockdev write read max offset ...passed
00:08:33.443    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:33.443    Test: blockdev writev readv 8 blocks ...passed
00:08:33.443    Test: blockdev writev readv 30 x 1block ...passed
00:08:33.443    Test: blockdev writev readv block ...passed
00:08:33.443    Test: blockdev writev readv size > 128k ...passed
00:08:33.443    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:33.443    Test: blockdev comparev and writev ...[2024-11-20 14:19:12.226639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb03c000 len:0x1000
00:08:33.443  [2024-11-20 14:19:12.226853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:33.443  passed
00:08:33.443    Test: blockdev nvme passthru rw ...passed
00:08:33.443    Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:19:12.227790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:33.443  [2024-11-20 14:19:12.227851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:33.443  passed
00:08:33.443    Test: blockdev nvme admin passthru ...passed
00:08:33.443    Test: blockdev copy ...passed
00:08:33.443  Suite: bdevio tests on: Nvme2n1
00:08:33.443    Test: blockdev write read block ...passed
00:08:33.443    Test: blockdev write zeroes read block ...passed
00:08:33.443    Test: blockdev write zeroes read no split ...passed
00:08:33.443    Test: blockdev write zeroes read split ...passed
00:08:33.443    Test: blockdev write zeroes read split partial ...passed
00:08:33.443    Test: blockdev reset ...[2024-11-20 14:19:12.305649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:08:33.443  [2024-11-20 14:19:12.310040] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:33.443  passed
00:08:33.443    Test: blockdev write read 8 blocks ...passed
00:08:33.443    Test: blockdev write read size > 128k ...passed
00:08:33.443    Test: blockdev write read invalid size ...passed
00:08:33.443    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:33.443    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:33.443    Test: blockdev write read max offset ...passed
00:08:33.443    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:33.443    Test: blockdev writev readv 8 blocks ...passed
00:08:33.443    Test: blockdev writev readv 30 x 1block ...passed
00:08:33.443    Test: blockdev writev readv block ...passed
00:08:33.443    Test: blockdev writev readv size > 128k ...passed
00:08:33.443    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:33.443    Test: blockdev comparev and writev ...[2024-11-20 14:19:12.319538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb038000 len:0x1000
00:08:33.443  [2024-11-20 14:19:12.319642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:33.443  passed
00:08:33.443    Test: blockdev nvme passthru rw ...passed
00:08:33.443    Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:19:12.320516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:33.443  [2024-11-20 14:19:12.320562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:33.443  passed
00:08:33.443    Test: blockdev nvme admin passthru ...passed
00:08:33.443    Test: blockdev copy ...passed
00:08:33.443  Suite: bdevio tests on: Nvme1n1
00:08:33.443    Test: blockdev write read block ...passed
00:08:33.443    Test: blockdev write zeroes read block ...passed
00:08:33.443    Test: blockdev write zeroes read no split ...passed
00:08:33.443    Test: blockdev write zeroes read split ...passed
00:08:33.443    Test: blockdev write zeroes read split partial ...passed
00:08:33.443    Test: blockdev reset ...[2024-11-20 14:19:12.395003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:08:33.443  [2024-11-20 14:19:12.398838] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:08:33.443  passed
00:08:33.443    Test: blockdev write read 8 blocks ...passed
00:08:33.443    Test: blockdev write read size > 128k ...passed
00:08:33.443    Test: blockdev write read invalid size ...passed
00:08:33.443    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:33.443    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:33.443    Test: blockdev write read max offset ...passed
00:08:33.443    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:33.443    Test: blockdev writev readv 8 blocks ...passed
00:08:33.443    Test: blockdev writev readv 30 x 1block ...passed
00:08:33.443    Test: blockdev writev readv block ...passed
00:08:33.443    Test: blockdev writev readv size > 128k ...passed
00:08:33.443    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:33.443    Test: blockdev comparev and writev ...[2024-11-20 14:19:12.408222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb034000 len:0x1000
00:08:33.443  [2024-11-20 14:19:12.408450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:08:33.443  passed
00:08:33.443    Test: blockdev nvme passthru rw ...passed
00:08:33.443    Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:19:12.409307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:08:33.443  [2024-11-20 14:19:12.409370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:08:33.443  passed
00:08:33.443    Test: blockdev nvme admin passthru ...passed
00:08:33.443    Test: blockdev copy ...passed
00:08:33.443  Suite: bdevio tests on: Nvme0n1
00:08:33.443    Test: blockdev write read block ...passed
00:08:33.443    Test: blockdev write zeroes read block ...passed
00:08:33.703    Test: blockdev write zeroes read no split ...passed
00:08:33.703    Test: blockdev write zeroes read split ...passed
00:08:33.703    Test: blockdev write zeroes read split partial ...passed
00:08:33.703    Test: blockdev reset ...[2024-11-20 14:19:12.488094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:08:33.703  [2024-11-20 14:19:12.491861] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:08:33.703  passed
00:08:33.703    Test: blockdev write read 8 blocks ...passed
00:08:33.703    Test: blockdev write read size > 128k ...passed
00:08:33.703    Test: blockdev write read invalid size ...passed
00:08:33.703    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:33.703    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:33.703    Test: blockdev write read max offset ...passed
00:08:33.703    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:33.703    Test: blockdev writev readv 8 blocks ...passed
00:08:33.703    Test: blockdev writev readv 30 x 1block ...passed
00:08:33.703    Test: blockdev writev readv block ...passed
00:08:33.703    Test: blockdev writev readv size > 128k ...passed
00:08:33.703    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:33.703    Test: blockdev comparev and writev ...[2024-11-20 14:19:12.499939] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:08:33.703  separate metadata which is not supported yet.
00:08:33.703  passed
00:08:33.703    Test: blockdev nvme passthru rw ...passed
00:08:33.703    Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:19:12.500388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:08:33.703  [2024-11-20 14:19:12.500444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:08:33.703  passed
00:08:33.703    Test: blockdev nvme admin passthru ...passed
00:08:33.703    Test: blockdev copy ...passed
00:08:33.703  
00:08:33.703  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:33.703                suites      6      6    n/a      0        0
00:08:33.703                 tests    138    138    138      0        0
00:08:33.703               asserts    893    893    893      0      n/a
00:08:33.703  
00:08:33.703  Elapsed time =    1.483 seconds
00:08:33.703  0
00:08:33.703   14:19:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61511
00:08:33.703   14:19:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61511 ']'
00:08:33.704   14:19:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61511
00:08:33.704    14:19:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:08:33.704   14:19:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:33.704    14:19:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61511
00:08:33.704  killing process with pid 61511
00:08:33.704   14:19:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:33.704   14:19:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:33.704   14:19:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61511'
00:08:33.704   14:19:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61511
00:08:33.704   14:19:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61511
00:08:34.640  ************************************
00:08:34.640  END TEST bdev_bounds
00:08:34.640  ************************************
00:08:34.640   14:19:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:08:34.640  
00:08:34.640  real	0m2.884s
00:08:34.640  user	0m7.557s
00:08:34.640  sys	0m0.401s
00:08:34.640   14:19:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:34.640   14:19:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:08:34.640   14:19:13 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:08:34.640   14:19:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:34.640   14:19:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:34.640   14:19:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:34.640  ************************************
00:08:34.640  START TEST bdev_nbd
00:08:34.640  ************************************
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:08:34.640    14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61572
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61572 /var/tmp/spdk-nbd.sock
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61572 ']'
00:08:34.640   14:19:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:34.641   14:19:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:34.641   14:19:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:34.641  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:34.641   14:19:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:34.641   14:19:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:08:34.899  [2024-11-20 14:19:13.686460] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:34.899  [2024-11-20 14:19:13.686660] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:34.899  [2024-11-20 14:19:13.873130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:35.158  [2024-11-20 14:19:14.001625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:08:35.726   14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:35.726    14:19:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:08:36.292    14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:36.292  1+0 records in
00:08:36.292  1+0 records out
00:08:36.292  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572448 s, 7.2 MB/s
00:08:36.292    14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:36.292   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:36.292    14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:08:36.551    14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:36.551  1+0 records in
00:08:36.551  1+0 records out
00:08:36.551  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499719 s, 8.2 MB/s
00:08:36.551    14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:36.551   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:36.551    14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:08:36.809    14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:36.809  1+0 records in
00:08:36.809  1+0 records out
00:08:36.809  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691501 s, 5.9 MB/s
00:08:36.809    14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:36.809   14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:36.809    14:19:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:08:37.376    14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:37.376  1+0 records in
00:08:37.376  1+0 records out
00:08:37.376  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695812 s, 5.9 MB/s
00:08:37.376    14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:37.376   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:37.376    14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:08:37.635    14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:37.635  1+0 records in
00:08:37.635  1+0 records out
00:08:37.635  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589214 s, 7.0 MB/s
00:08:37.635    14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:37.635   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:37.635    14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:08:38.203    14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:38.203  1+0 records in
00:08:38.203  1+0 records out
00:08:38.203  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000829515 s, 4.9 MB/s
00:08:38.203    14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:38.203   14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:08:38.203    14:19:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:38.461   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:08:38.461    {
00:08:38.461      "nbd_device": "/dev/nbd0",
00:08:38.461      "bdev_name": "Nvme0n1"
00:08:38.461    },
00:08:38.461    {
00:08:38.461      "nbd_device": "/dev/nbd1",
00:08:38.461      "bdev_name": "Nvme1n1"
00:08:38.461    },
00:08:38.461    {
00:08:38.461      "nbd_device": "/dev/nbd2",
00:08:38.461      "bdev_name": "Nvme2n1"
00:08:38.461    },
00:08:38.461    {
00:08:38.461      "nbd_device": "/dev/nbd3",
00:08:38.461      "bdev_name": "Nvme2n2"
00:08:38.461    },
00:08:38.461    {
00:08:38.461      "nbd_device": "/dev/nbd4",
00:08:38.461      "bdev_name": "Nvme2n3"
00:08:38.461    },
00:08:38.461    {
00:08:38.461      "nbd_device": "/dev/nbd5",
00:08:38.461      "bdev_name": "Nvme3n1"
00:08:38.461    }
00:08:38.461  ]'
00:08:38.462   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:08:38.462    14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:08:38.462    {
00:08:38.462      "nbd_device": "/dev/nbd0",
00:08:38.462      "bdev_name": "Nvme0n1"
00:08:38.462    },
00:08:38.462    {
00:08:38.462      "nbd_device": "/dev/nbd1",
00:08:38.462      "bdev_name": "Nvme1n1"
00:08:38.462    },
00:08:38.462    {
00:08:38.462      "nbd_device": "/dev/nbd2",
00:08:38.462      "bdev_name": "Nvme2n1"
00:08:38.462    },
00:08:38.462    {
00:08:38.462      "nbd_device": "/dev/nbd3",
00:08:38.462      "bdev_name": "Nvme2n2"
00:08:38.462    },
00:08:38.462    {
00:08:38.462      "nbd_device": "/dev/nbd4",
00:08:38.462      "bdev_name": "Nvme2n3"
00:08:38.462    },
00:08:38.462    {
00:08:38.462      "nbd_device": "/dev/nbd5",
00:08:38.462      "bdev_name": "Nvme3n1"
00:08:38.462    }
00:08:38.462  ]'
00:08:38.462    14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:08:38.462   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:08:38.462   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:38.462   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:08:38.462   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:38.462   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:08:38.462   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:38.462   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:38.719    14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:38.719   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:38.719   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:38.719   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:38.719   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:38.719   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:38.719   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:38.719   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:38.719   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:38.719   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:38.977    14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:38.977   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:38.977   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:38.977   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:38.977   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:38.977   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:38.977   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:38.977   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:38.977   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:38.977   14:19:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:08:39.236    14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:08:39.236   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:08:39.236   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:08:39.236   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:39.236   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:39.236   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:08:39.236   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:39.236   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:39.236   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:39.236   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:08:39.803    14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:08:39.804   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:08:39.804   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:08:39.804   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:39.804   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:39.804   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:08:39.804   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:39.804   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:39.804   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:39.804   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:08:40.062    14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:08:40.062   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:08:40.062   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:08:40.062   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:40.062   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:40.062   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:08:40.062   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:40.062   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:40.062   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:40.062   14:19:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:08:40.320    14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:08:40.320   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:08:40.320   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:08:40.320   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:40.320   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:40.320   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:08:40.320   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:40.320   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:40.320    14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:40.320    14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:40.320     14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:40.577    14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:40.577     14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:40.577     14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:40.577    14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:40.577     14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:40.577     14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:08:40.577     14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:08:40.577    14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:08:40.577    14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:40.835   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:08:41.094  /dev/nbd0
00:08:41.094    14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:41.094  1+0 records in
00:08:41.094  1+0 records out
00:08:41.094  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361539 s, 11.3 MB/s
00:08:41.094    14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:41.094   14:19:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1
00:08:41.352  /dev/nbd1
00:08:41.352    14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:41.352  1+0 records in
00:08:41.352  1+0 records out
00:08:41.352  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045865 s, 8.9 MB/s
00:08:41.352    14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:41.352   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10
00:08:41.919  /dev/nbd10
00:08:41.919    14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:41.919  1+0 records in
00:08:41.919  1+0 records out
00:08:41.919  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529098 s, 7.7 MB/s
00:08:41.919    14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:41.919   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11
00:08:42.177  /dev/nbd11
00:08:42.177    14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:42.177  1+0 records in
00:08:42.177  1+0 records out
00:08:42.177  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533081 s, 7.7 MB/s
00:08:42.177    14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:42.177   14:19:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12
00:08:42.435  /dev/nbd12
00:08:42.435    14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:08:42.435   14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:08:42.435   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:08:42.435   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:42.435   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:42.435   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:42.435   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:08:42.435   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:42.436   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:42.436   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:42.436   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:42.436  1+0 records in
00:08:42.436  1+0 records out
00:08:42.436  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058296 s, 7.0 MB/s
00:08:42.436    14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:42.436   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:42.436   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:42.436   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:42.436   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:42.436   14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:42.436   14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:08:42.436   14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13
00:08:42.693  /dev/nbd13
00:08:42.693    14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:42.693  1+0 records in
00:08:42.693  1+0 records out
00:08:42.693  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689773 s, 5.9 MB/s
00:08:42.693    14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:42.693   14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
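
The six start-ups above all follow one pattern: map a bdev to an nbd node over the RPC socket, poll /proc/partitions until the kernel lists the device, then prove it actually serves data with a direct-I/O read of one 4 KiB block. A minimal standalone sketch, assuming the SPDK repo as the working directory; the bdev name "mybdev", the retry budget, and the sleep interval are illustrative, not values read out of nbd_common.sh:

    rpc_sock=/var/tmp/spdk-nbd.sock
    nbd=/dev/nbd0
    scripts/rpc.py -s "$rpc_sock" nbd_start_disk mybdev "$nbd"
    for ((i = 1; i <= 20; i++)); do                    # poll until the kernel lists the device
        grep -q -w "$(basename "$nbd")" /proc/partitions && break
        sleep 0.1
    done
    dd if="$nbd" of=/tmp/probe bs=4096 count=1 iflag=direct   # O_DIRECT read proves it serves data
    [ "$(stat -c %s /tmp/probe)" -ne 0 ] && rm -f /tmp/probe
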
00:08:42.693    14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:42.693    14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:42.693     14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:43.310    14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd0",
00:08:43.310      "bdev_name": "Nvme0n1"
00:08:43.310    },
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd1",
00:08:43.310      "bdev_name": "Nvme1n1"
00:08:43.310    },
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd10",
00:08:43.310      "bdev_name": "Nvme2n1"
00:08:43.310    },
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd11",
00:08:43.310      "bdev_name": "Nvme2n2"
00:08:43.310    },
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd12",
00:08:43.310      "bdev_name": "Nvme2n3"
00:08:43.310    },
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd13",
00:08:43.310      "bdev_name": "Nvme3n1"
00:08:43.310    }
00:08:43.310  ]'
00:08:43.310     14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:43.310     14:19:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd0",
00:08:43.310      "bdev_name": "Nvme0n1"
00:08:43.310    },
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd1",
00:08:43.310      "bdev_name": "Nvme1n1"
00:08:43.310    },
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd10",
00:08:43.310      "bdev_name": "Nvme2n1"
00:08:43.310    },
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd11",
00:08:43.310      "bdev_name": "Nvme2n2"
00:08:43.310    },
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd12",
00:08:43.310      "bdev_name": "Nvme2n3"
00:08:43.310    },
00:08:43.310    {
00:08:43.310      "nbd_device": "/dev/nbd13",
00:08:43.310      "bdev_name": "Nvme3n1"
00:08:43.310    }
00:08:43.310  ]'
00:08:43.310    14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:43.310  /dev/nbd1
00:08:43.310  /dev/nbd10
00:08:43.310  /dev/nbd11
00:08:43.310  /dev/nbd12
00:08:43.310  /dev/nbd13'
00:08:43.310     14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:43.310  /dev/nbd1
00:08:43.310  /dev/nbd10
00:08:43.310  /dev/nbd11
00:08:43.310  /dev/nbd12
00:08:43.310  /dev/nbd13'
00:08:43.310     14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:43.310    14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:08:43.310    14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
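
Condensed, the count check just performed is the following: list the active mappings, pull the device paths out of the JSON with jq, and compare the tally against the expected total. The socket path matches the trace; "expected" is an illustrative variable name:

    expected=6
    json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -ne "$expected" ] && echo "only $count of $expected nbd devices mapped" >&2
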
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:08:43.310  256+0 records in
00:08:43.310  256+0 records out
00:08:43.310  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00792043 s, 132 MB/s
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:43.310  256+0 records in
00:08:43.310  256+0 records out
00:08:43.310  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130204 s, 8.1 MB/s
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:43.310   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:43.605  256+0 records in
00:08:43.605  256+0 records out
00:08:43.605  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149221 s, 7.0 MB/s
00:08:43.605   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:43.605   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:08:43.605  256+0 records in
00:08:43.605  256+0 records out
00:08:43.605  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144558 s, 7.3 MB/s
00:08:43.605   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:43.605   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:08:43.864  256+0 records in
00:08:43.864  256+0 records out
00:08:43.864  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152468 s, 6.9 MB/s
00:08:43.864   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:43.864   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:08:43.864  256+0 records in
00:08:43.864  256+0 records out
00:08:43.864  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149934 s, 7.0 MB/s
00:08:43.864   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:43.864   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:08:44.123  256+0 records in
00:08:44.123  256+0 records out
00:08:44.123  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133915 s, 7.8 MB/s
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
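
The write/verify pass above reduces to the sketch below: fill a 1 MiB temp file with random data, dd it onto each nbd device with O_DIRECT, then cmp the first 1 MiB back. Device list and sizes mirror the trace; the temp path is an assumption and error handling is omitted:

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"                          # non-zero exit on first differing byte
    done
    rm "$tmp"
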
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:44.123   14:19:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:44.382    14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:44.382   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:44.382   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:44.382   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:44.382   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:44.382   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:44.382   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:44.382   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:44.382   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:44.382   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:44.948    14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:44.948   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:44.948   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:44.948   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:44.948   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:44.948   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:44.948   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:44.948   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:44.948   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:44.948   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:08:45.206    14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:08:45.206   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:08:45.206   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:08:45.206   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:45.206   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:45.206   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:08:45.206   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:45.206   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:45.206   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:45.206   14:19:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:08:45.464    14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:08:45.464   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:08:45.464   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:08:45.464   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:45.464   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:45.464   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:08:45.464   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:45.464   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:45.464   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:45.464   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:08:45.723    14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:08:45.723   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:08:45.723   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:08:45.723   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:45.723   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:45.723   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:08:45.723   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:45.723   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:45.723   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:45.723   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:08:45.981    14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:08:45.981   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:08:45.981   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:08:45.981   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:45.981   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:45.981   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:08:45.981   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:45.981   14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:45.981    14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:45.981    14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:45.981     14:19:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:46.240    14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:46.240     14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:46.240     14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:46.499    14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:46.499     14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:46.499     14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:08:46.499     14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:08:46.499    14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:08:46.499    14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:08:46.499   14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:08:46.499   14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:46.499   14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
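
Teardown is the mirror image of setup: stop each mapping over RPC, then poll until the node disappears from the partition table, so the next test never races a half-released device. A sketch under the same assumptions as above (the 20-iteration bound comes from the trace, the sleep interval does not):

    stop_nbd() {
        local dev=$1 name i
        name=$(basename "$dev")
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || return 0   # gone from the partition table
            sleep 0.1
        done
        return 1                                              # device never went away
    }
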
00:08:46.499   14:19:25 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:08:46.499   14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:46.499   14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:08:46.499   14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:08:46.757  malloc_lvol_verify
00:08:46.757   14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:08:47.016  ecd19eb7-6527-4e9b-851b-477b76bfd7fc
00:08:47.016   14:19:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:08:47.275  86153164-b226-4a36-b5cc-38a2d7df92ff
00:08:47.275   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:08:47.534  /dev/nbd0
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:08:47.534  mke2fs 1.47.0 (5-Feb-2023)
00:08:47.534  Discarding device blocks: done
00:08:47.534  Creating filesystem with 4096 1k blocks and 1024 inodes
00:08:47.534  
00:08:47.534  Allocating group tables: done
00:08:47.534  Writing inode tables: done
00:08:47.534  Creating journal (1024 blocks): done
00:08:47.534  Writing superblocks and filesystem accounting information: done
00:08:47.534  
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:47.534   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:48.102    14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
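
The nbd_with_lvol_verify step above, replayed as a standalone sequence: a 16 MiB malloc bdev with 512-byte blocks, an lvstore on it, a 4 MiB lvol, exported over nbd and proven usable by formatting it. Sizes, names, and RPC arguments mirror the log; note that /sys/block/<nbd>/size is reported in 512-byte sectors, which is why the trace shows 8192 for a 4 MiB volume:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $rpc bdev_lvol_create lvol 4 -l lvs
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    [ "$(cat /sys/block/nbd0/size)" -ne 0 ]      # capacity propagated to the kernel?
    mkfs.ext4 /dev/nbd0                          # a successful format is the actual verification
    $rpc nbd_stop_disk /dev/nbd0
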
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61572
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61572 ']'
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61572
00:08:48.102    14:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:48.102    14:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61572
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:48.102  killing process with pid 61572
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61572'
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61572
00:08:48.102   14:19:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61572
00:08:49.038   14:19:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:08:49.038  
00:08:49.038  real	0m14.313s
00:08:49.038  user	0m21.071s
00:08:49.038  sys	0m4.387s
00:08:49.038   14:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:49.038   14:19:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:08:49.038  ************************************
00:08:49.038  END TEST bdev_nbd
00:08:49.038  ************************************
00:08:49.038   14:19:27 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:08:49.038   14:19:27 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']'
00:08:49.038  skipping fio tests on NVMe due to multi-ns failures.
00:08:49.038   14:19:27 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:08:49.038   14:19:27 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:49.038   14:19:27 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:49.038   14:19:27 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:08:49.038   14:19:27 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:49.038   14:19:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:49.038  ************************************
00:08:49.038  START TEST bdev_verify
00:08:49.038  ************************************
00:08:49.038   14:19:27 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:49.297  [2024-11-20 14:19:28.024399] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:49.297  [2024-11-20 14:19:28.024547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61994 ]
00:08:49.297  [2024-11-20 14:19:28.204772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:49.556  [2024-11-20 14:19:28.335101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:49.556  [2024-11-20 14:19:28.335107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:50.121  Running I/O for 5 seconds...
00:08:52.431      19264.00 IOPS,    75.25 MiB/s
[2024-11-20T14:19:32.348Z]     19584.00 IOPS,    76.50 MiB/s
[2024-11-20T14:19:33.722Z]     19754.67 IOPS,    77.17 MiB/s
[2024-11-20T14:19:34.288Z]     19776.00 IOPS,    77.25 MiB/s
[2024-11-20T14:19:34.288Z]     19596.80 IOPS,    76.55 MiB/s
00:08:55.306                                                                                                  Latency(us)
00:08:55.306  
[2024-11-20T14:19:34.288Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:08:55.306  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0x0 length 0xbd0bd
00:08:55.306  	 Nvme0n1             :       5.06    1592.59       6.22       0.00     0.00   80149.40   15966.95  107717.35
00:08:55.306  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:55.306  	 Nvme0n1             :       5.04    1624.94       6.35       0.00     0.00   78507.25   16086.11  121062.87
00:08:55.306  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0x0 length 0xa0000
00:08:55.306  	 Nvme1n1             :       5.07    1591.16       6.22       0.00     0.00   80042.34   18826.71  101521.22
00:08:55.306  Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0xa0000 length 0xa0000
00:08:55.306  	 Nvme1n1             :       5.04    1624.40       6.35       0.00     0.00   78369.73   17754.30  114866.73
00:08:55.306  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0x0 length 0x80000
00:08:55.306  	 Nvme2n1             :       5.07    1590.37       6.21       0.00     0.00   79934.76   21686.46  100091.35
00:08:55.306  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0x80000 length 0x80000
00:08:55.306  	 Nvme2n1             :       5.06    1630.46       6.37       0.00     0.00   77889.19    6076.97  110577.11
00:08:55.306  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0x0 length 0x80000
00:08:55.306  	 Nvme2n2             :       5.07    1589.59       6.21       0.00     0.00   79793.78   21686.46  102951.10
00:08:55.306  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0x80000 length 0x80000
00:08:55.306  	 Nvme2n2             :       5.08    1637.73       6.40       0.00     0.00   77498.27   10902.81  103904.35
00:08:55.306  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0x0 length 0x80000
00:08:55.306  	 Nvme2n3             :       5.08    1588.86       6.21       0.00     0.00   79658.48   17635.14  106287.48
00:08:55.306  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0x80000 length 0x80000
00:08:55.306  	 Nvme2n3             :       5.08    1637.25       6.40       0.00     0.00   77361.52   10485.76  109147.23
00:08:55.306  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0x0 length 0x20000
00:08:55.306  	 Nvme3n1             :       5.08    1588.02       6.20       0.00     0.00   79550.71   11081.54  109623.85
00:08:55.306  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:55.306  	 Verification LBA range: start 0x20000 length 0x20000
00:08:55.306  	 Nvme3n1             :       5.08    1636.73       6.39       0.00     0.00   77244.41   10247.45  117726.49
00:08:55.306  
[2024-11-20T14:19:34.288Z]  ===================================================================================================================
00:08:55.306  
[2024-11-20T14:19:34.288Z]  Total                       :              19332.10      75.52       0.00     0.00   78818.95    6076.97  121062.87
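
The summary row is internally consistent: at a 4096-byte I/O size, MiB/s is IOPS divided by 256, which the Total line satisfies within rounding:

    echo "scale=2; 19332.10 * 4096 / 1048576" | bc    # -> 75.51, matching the 75.52 MiB/s above
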
00:08:56.682  
00:08:56.682  real	0m7.698s
00:08:56.682  user	0m14.212s
00:08:56.682  sys	0m0.260s
00:08:56.682   14:19:35 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.682   14:19:35 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:56.682  ************************************
00:08:56.682  END TEST bdev_verify
00:08:56.682  ************************************
00:08:56.940   14:19:35 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:56.940   14:19:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:08:56.940   14:19:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.940   14:19:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:56.940  ************************************
00:08:56.940  START TEST bdev_verify_big_io
00:08:56.940  ************************************
00:08:56.940   14:19:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:56.940  [2024-11-20 14:19:35.765358] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:08:56.940  [2024-11-20 14:19:35.765755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62098 ]
00:08:57.199  [2024-11-20 14:19:35.940956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:57.199  [2024-11-20 14:19:36.053609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.199  [2024-11-20 14:19:36.053621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:58.133  Running I/O for 5 seconds...
00:09:03.330       1717.00 IOPS,   107.31 MiB/s
[2024-11-20T14:19:42.878Z]      2388.00 IOPS,   149.25 MiB/s
[2024-11-20T14:19:43.136Z]      2719.67 IOPS,   169.98 MiB/s
00:09:04.154                                                                                                  Latency(us)
00:09:04.154  
[2024-11-20T14:19:43.136Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:04.154  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:04.154  	 Verification LBA range: start 0x0 length 0xbd0b
00:09:04.154  	 Nvme0n1             :       5.74     128.27       8.02       0.00     0.00  963676.65   11617.75 1151527.10
00:09:04.154  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:04.154  	 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:04.155  	 Nvme0n1             :       5.74     117.01       7.31       0.00     0.00 1061582.04   24188.74 1021884.97
00:09:04.155  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:04.155  	 Verification LBA range: start 0x0 length 0xa000
00:09:04.155  	 Nvme1n1             :       5.74     124.50       7.78       0.00     0.00  955154.08   88175.71  976128.93
00:09:04.155  Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:04.155  	 Verification LBA range: start 0xa000 length 0xa000
00:09:04.155  	 Nvme1n1             :       5.75     116.88       7.30       0.00     0.00 1032754.80   81979.58  854112.81
00:09:04.155  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:04.155  	 Verification LBA range: start 0x0 length 0x8000
00:09:04.155  	 Nvme2n1             :       5.88     122.12       7.63       0.00     0.00  935026.29  117726.49 1494697.43
00:09:04.155  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:04.155  	 Verification LBA range: start 0x8000 length 0x8000
00:09:04.155  	 Nvme2n1             :       5.80     121.46       7.59       0.00     0.00  971726.07   42896.29  907494.87
00:09:04.155  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:04.155  	 Verification LBA range: start 0x0 length 0x8000
00:09:04.155  	 Nvme2n2             :       5.95     131.82       8.24       0.00     0.00  843752.75   23592.96 1525201.45
00:09:04.155  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:04.155  	 Verification LBA range: start 0x8000 length 0x8000
00:09:04.155  	 Nvme2n2             :       5.80     121.41       7.59       0.00     0.00  942044.79   43849.54  960876.92
00:09:04.155  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:04.155  	 Verification LBA range: start 0x0 length 0x8000
00:09:04.155  	 Nvme2n3             :       5.97     136.61       8.54       0.00     0.00  785933.19   14715.81 1563331.49
00:09:04.155  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:04.155  	 Verification LBA range: start 0x8000 length 0x8000
00:09:04.155  	 Nvme2n3             :       5.88     125.59       7.85       0.00     0.00  878954.49   33602.09 1006632.96
00:09:04.155  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:04.155  	 Verification LBA range: start 0x0 length 0x2000
00:09:04.155  	 Nvme3n1             :       6.03     161.79      10.11       0.00     0.00  650464.10    1072.41 1593835.52
00:09:04.155  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:04.155  	 Verification LBA range: start 0x2000 length 0x2000
00:09:04.155  	 Nvme3n1             :       5.89     141.29       8.83       0.00     0.00  764366.40    3291.69 1021884.97
00:09:04.155  
[2024-11-20T14:19:43.137Z]  ===================================================================================================================
00:09:04.155  
[2024-11-20T14:19:43.137Z]  Total                       :               1548.76      96.80       0.00     0.00  886802.45    1072.41 1593835.52
00:09:05.591  
00:09:05.591  real	0m8.892s
00:09:05.591  user	0m16.596s
00:09:05.591  sys	0m0.269s
00:09:05.591   14:19:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:05.591   14:19:44 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:05.591  ************************************
00:09:05.591  END TEST bdev_verify_big_io
00:09:05.591  ************************************
00:09:05.850   14:19:44 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:05.850   14:19:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:05.850   14:19:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:05.850   14:19:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:05.850  ************************************
00:09:05.850  START TEST bdev_write_zeroes
00:09:05.850  ************************************
00:09:05.850   14:19:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:05.850  [2024-11-20 14:19:44.748402] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:09:05.850  [2024-11-20 14:19:44.748663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62213 ]
00:09:06.109  [2024-11-20 14:19:44.940301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:06.109  [2024-11-20 14:19:45.071201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:07.044  Running I/O for 1 seconds...
00:09:07.979      38784.00 IOPS,   151.50 MiB/s
00:09:07.979                                                                                                  Latency(us)
00:09:07.979  
[2024-11-20T14:19:46.961Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:07.979  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:07.979  	 Nvme0n1             :       1.03    6491.67      25.36       0.00     0.00   19669.86    6583.39   50283.99
00:09:07.979  Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:07.979  	 Nvme1n1             :       1.03    6483.28      25.33       0.00     0.00   19667.20   12451.84   52905.43
00:09:07.979  Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:07.979  	 Nvme2n1             :       1.03    6475.26      25.29       0.00     0.00   19638.03   12273.11   50760.61
00:09:07.979  Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:07.979  	 Nvme2n2             :       1.03    6467.15      25.26       0.00     0.00   19594.24   12213.53   51237.24
00:09:07.979  Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:07.979  	 Nvme2n3             :       1.03    6459.01      25.23       0.00     0.00   19584.55   11439.01   48854.11
00:09:07.979  Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:07.979  	 Nvme3n1             :       1.03    6450.95      25.20       0.00     0.00   19506.52    9711.24   47185.92
00:09:07.979  
[2024-11-20T14:19:46.961Z]  ===================================================================================================================
00:09:07.979  
[2024-11-20T14:19:46.961Z]  Total                       :              38827.31     151.67       0.00     0.00   19610.07    6583.39   52905.43
00:09:08.915  
00:09:08.915  real	0m3.259s
00:09:08.915  user	0m2.860s
00:09:08.915  sys	0m0.267s
00:09:08.915   14:19:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:08.915   14:19:47 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:08.915  ************************************
00:09:08.915  END TEST bdev_write_zeroes
00:09:08.915  ************************************
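
-w write_zeroes drives zero-fill requests through the bdev layer rather than data writes; as general SPDK behavior (background knowledge, not something this log shows), devices without a native zeroing command have the request emulated with writes of a zeroed buffer. A rough userspace analogue, purely illustrative:

    dd if=/dev/zero of=/dev/nbd0 bs=4096 count=256 oflag=direct conv=notrunc   # zero the first 1 MiB
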
00:09:09.173   14:19:47 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:09.173   14:19:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:09.173   14:19:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:09.173   14:19:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:09.173  ************************************
00:09:09.173  START TEST bdev_json_nonenclosed
00:09:09.173  ************************************
00:09:09.173   14:19:47 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:09.173  [2024-11-20 14:19:48.022862] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:09:09.173  [2024-11-20 14:19:48.023022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62270 ]
00:09:09.431  [2024-11-20 14:19:48.203525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.431  [2024-11-20 14:19:48.331979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.431  [2024-11-20 14:19:48.332131] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:09:09.432  [2024-11-20 14:19:48.332167] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:09:09.432  [2024-11-20 14:19:48.332186] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:09.690  
00:09:09.690  real	0m0.736s
00:09:09.690  user	0m0.514s
00:09:09.690  sys	0m0.115s
00:09:09.690   14:19:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:09.690   14:19:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:09:09.690  ************************************
00:09:09.690  END TEST bdev_json_nonenclosed
00:09:09.690  ************************************
00:09:09.949   14:19:48 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:09.949   14:19:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:09:09.949   14:19:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:09.949   14:19:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:09.949  ************************************
00:09:09.949  START TEST bdev_json_nonarray
00:09:09.949  ************************************
00:09:09.949   14:19:48 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:09.949  [2024-11-20 14:19:48.782547] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:09:09.949  [2024-11-20 14:19:48.782726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62297 ]
00:09:10.208  [2024-11-20 14:19:48.956213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:10.208  [2024-11-20 14:19:49.065516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.208  [2024-11-20 14:19:49.065665] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:09:10.208  [2024-11-20 14:19:49.065700] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:09:10.208  [2024-11-20 14:19:49.065717] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:10.466  
00:09:10.466  real	0m0.632s
00:09:10.466  user	0m0.410s
00:09:10.466  sys	0m0.116s
00:09:10.466   14:19:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:10.466   14:19:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:09:10.466  ************************************
00:09:10.466  END TEST bdev_json_nonarray
00:09:10.466  ************************************
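
Both JSON tests are negative tests: bdevperf must reject the malformed config and exit non-zero, which the ERROR and spdk_app_stop'd lines above record. The fixture files live at test/bdev/nonenclosed.json and nonarray.json; their exact contents are not shown in this log, so the shape below is an assumption:

    bad=/tmp/nonarray.json
    printf '{ "subsystems": {} }\n' > "$bad"     # assumed shape: "subsystems" present but not an array
    if build/examples/bdevperf --json "$bad" -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "unexpected success" >&2
    fi
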
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:09:10.466   14:19:49 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:09:10.466  
00:09:10.466  real	0m44.965s
00:09:10.466  user	1m9.409s
00:09:10.466  sys	0m6.834s
00:09:10.466   14:19:49 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:10.466   14:19:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:09:10.466  ************************************
00:09:10.466  END TEST blockdev_nvme
00:09:10.466  ************************************
00:09:10.466    14:19:49  -- spdk/autotest.sh@209 -- # uname -s
00:09:10.466   14:19:49  -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:09:10.466   14:19:49  -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:09:10.466   14:19:49  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:10.466   14:19:49  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:10.466   14:19:49  -- common/autotest_common.sh@10 -- # set +x
00:09:10.466  ************************************
00:09:10.466  START TEST blockdev_nvme_gpt
00:09:10.466  ************************************
00:09:10.466   14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:09:10.724  * Looking for test storage...
00:09:10.724  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:09:10.724    14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:10.724     14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version
00:09:10.724     14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:10.724    14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-:
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-:
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<'
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:10.724     14:19:49 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1
00:09:10.724     14:19:49 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1
00:09:10.724     14:19:49 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:10.724     14:19:49 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1
00:09:10.724     14:19:49 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2
00:09:10.724     14:19:49 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2
00:09:10.724     14:19:49 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:10.724     14:19:49 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:10.724    14:19:49 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0
00:09:10.724    14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:10.724    14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:10.724  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.724  		--rc genhtml_branch_coverage=1
00:09:10.724  		--rc genhtml_function_coverage=1
00:09:10.724  		--rc genhtml_legend=1
00:09:10.724  		--rc geninfo_all_blocks=1
00:09:10.724  		--rc geninfo_unexecuted_blocks=1
00:09:10.724  		
00:09:10.724  		'
00:09:10.724    14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:10.724  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.724  		--rc genhtml_branch_coverage=1
00:09:10.724  		--rc genhtml_function_coverage=1
00:09:10.724  		--rc genhtml_legend=1
00:09:10.724  		--rc geninfo_all_blocks=1
00:09:10.724  		--rc geninfo_unexecuted_blocks=1
00:09:10.724  		
00:09:10.724  		'
00:09:10.724    14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:10.724  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.724  		--rc genhtml_branch_coverage=1
00:09:10.724  		--rc genhtml_function_coverage=1
00:09:10.724  		--rc genhtml_legend=1
00:09:10.724  		--rc geninfo_all_blocks=1
00:09:10.724  		--rc geninfo_unexecuted_blocks=1
00:09:10.724  		
00:09:10.724  		'
00:09:10.724    14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:10.724  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.724  		--rc genhtml_branch_coverage=1
00:09:10.724  		--rc genhtml_function_coverage=1
00:09:10.724  		--rc genhtml_legend=1
00:09:10.724  		--rc geninfo_all_blocks=1
00:09:10.724  		--rc geninfo_unexecuted_blocks=1
00:09:10.724  		
00:09:10.724  		'
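[With the version check resolved, the harness exports LCOV_OPTS/LCOV so later coverage runs inherit the branch- and function-coverage switches. A hypothetical downstream invocation consuming those exports; the output paths and file names here are ours, not from this run:]

    # Hypothetical consumer of the exported options above: capture coverage
    # for the repo and render HTML with matching genhtml switches.
    # $LCOV_OPTS is deliberately unquoted so the flags word-split.
    lcov $LCOV_OPTS --capture --directory /home/vagrant/spdk_repo/spdk \
         --output-file coverage.info
    genhtml --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
            coverage.info --output-directory cov_html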
00:09:10.724   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:09:10.725    14:19:49 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # :
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:09:10.725    14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device=
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek=
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx=
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]]
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]]
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62381
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62381
00:09:10.725   14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62381 ']'
00:09:10.725   14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:10.725   14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:10.725   14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:10.725  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:10.725   14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:10.725   14:19:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:10.725   14:19:49 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:09:10.983  [2024-11-20 14:19:49.784548] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:09:10.983  [2024-11-20 14:19:49.784765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62381 ]
00:09:11.241  [2024-11-20 14:19:49.988367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:11.241  [2024-11-20 14:19:50.163133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:12.177   14:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:12.177   14:19:50 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0
00:09:12.177   14:19:50 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:09:12.177   14:19:50 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf
00:09:12.177   14:19:50 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:12.435  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:12.435  Waiting for block devices as requested
00:09:12.695  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:12.695  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:12.695  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:12.954  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:18.216  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
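[The is_block_zoned loop above reads /sys/block/<dev>/queue/zoned for every NVMe namespace; any device reporting something other than "none" is zoned and must be excluded from GPT testing. A condensed sketch of that scan, keyed by device name here where the real helper maps devices back to PCI addresses:]

    #!/usr/bin/env bash
    # Collect zoned NVMe block devices, mirroring the sysfs probe above.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # queue/zoned reads "none" for conventional namespaces and
        # "host-managed"/"host-aware" for zoned ones.
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1
        fi
    done
    echo "zoned devices: ${!zoned_devs[*]}"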
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1')
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme=
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}"
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1
00:09:18.216    14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:09:18.216  BYT;
00:09:18.216  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:09:18.216  BYT;
00:09:18.216  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]]
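[Device selection above leans on parted's machine-readable output: `parted <dev> -ms print` reports "unrecognised disk label" for a blank disk, and the first namespace matching that error becomes the GPT scratch device, so a disk that is already partitioned is never touched. A sketch of the same probe:]

    #!/usr/bin/env bash
    # Pick the first NVMe namespace with no recognised disk label.
    gpt_nvme=""
    for sysdev in /sys/block/nvme*; do
        dev=/dev/${sysdev##*/}
        pt=$(parted "$dev" -ms print 2>&1) || true   # error text lands on stderr
        if [[ $pt == *"$dev: unrecognised disk label"* ]]; then
            gpt_nvme=$dev
            break
        fi
    done
    echo "selected: ${gpt_nvme:-none found}"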
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:09:18.216   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:09:18.216    14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old
00:09:18.216    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid
00:09:18.216    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:09:18.216    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:09:18.216    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()'
00:09:18.216    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _
00:09:18.216     14:19:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:09:18.216    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:09:18.216    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:09:18.216    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:09:18.217   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:09:18.217    14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt
00:09:18.217    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid
00:09:18.217    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:09:18.217    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:09:18.217    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()'
00:09:18.217    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _
00:09:18.217     14:19:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:09:18.217    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:09:18.217    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:09:18.217    14:19:56 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:09:18.217   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
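[Both GUID lookups above parse module/bdev/gpt/gpt.h directly: grep -w finds the macro line, `IFS='()' read` splits off the parenthesised value, and the 0x prefixes are stripped. A sketch under the assumption that the macro's fields are comma-separated in the header, which the trace's intermediate value suggests:]

    #!/usr/bin/env bash
    # Sketch of get_spdk_gpt/get_spdk_gpt_old: pull a GUID macro out of
    # gpt.h and normalize it to a plain hyphenated GUID string.
    get_spdk_guid() {
        local header=$1 macro=$2 raw _
        IFS='()' read -r _ raw _ < <(grep -w "$macro" "$header")
        raw=${raw//, /-}   # assumed comma-separated macro fields -> hyphens
        raw=${raw//0x/}    # 0x6527994e-0x2c5a-... -> 6527994e-2c5a-...
        echo "$raw"
    }

    gpt_h=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    get_spdk_guid "$gpt_h" SPDK_GPT_PART_TYPE_GUID       # 6527994e-2c5a-4eec-9613-8f5944074e8b
    get_spdk_guid "$gpt_h" SPDK_GPT_PART_TYPE_GUID_OLD   # 7c5222bd-8f5d-4087-9c00-bf9843c7b58c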
00:09:18.217   14:19:56 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:09:19.152  The operation has completed successfully.
00:09:19.152   14:19:57 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:09:20.086  The operation has completed successfully.
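[Taken together, the partitioning steps above lay down a fresh GPT with two half-disk partitions, then retag each one so SPDK's gpt bdev module will claim it: partition 1 gets the current SPDK type GUID, partition 2 the legacy one. Condensed below, with the device and GUIDs exactly as in this run; run against a scratch disk only:]

    #!/usr/bin/env bash
    dev=/dev/nvme0n1
    spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b        # SPDK_GPT_PART_TYPE_GUID
    spdk_guid_old=7c5222bd-8f5d-4087-9c00-bf9843c7b58c    # SPDK_GPT_PART_TYPE_GUID_OLD

    # Fresh GPT label, two partitions covering the whole disk.
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%

    # Retag each partition with a type GUID (-t) and a unique GUID (-u).
    sgdisk -t 1:"$spdk_guid"     -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
    sgdisk -t 2:"$spdk_guid_old" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"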
00:09:20.086   14:19:58 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:20.664  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:21.237  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:21.237  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:21.237  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:21.237  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:21.237   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs
00:09:21.237   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.237   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:21.237  []
00:09:21.237   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.237   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf
00:09:21.237   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json
00:09:21.237   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json
00:09:21.237    14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:09:21.494   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\'''
00:09:21.494   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.494   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:21.752   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
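[gen_nvme.sh above emits a bdev subsystem config with one bdev_nvme_attach_controller entry per PCIe controller, which load_subsystem_config replays in a single RPC. The same attach can be done call by call with scripts/rpc.py; this is a sketch, and rpc_cmd in the trace is the harness wrapper around that script:]

    #!/usr/bin/env bash
    # Attach the four QEMU NVMe controllers seen in this run one RPC at a
    # time instead of via a generated JSON config.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    i=0
    for traddr in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        "$rpc" bdev_nvme_attach_controller -b "Nvme$i" -t PCIe -a "$traddr"
        i=$((i + 1))
    done
    "$rpc" bdev_get_bdevs | jq -r '.[].name'   # list the resulting bdevs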
00:09:21.752   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:09:21.752   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.752   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:21.752   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.752   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat
00:09:21.752    14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:09:21.752    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.752    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:21.752    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.753    14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:09:21.753    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.753    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:21.753    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.753    14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:09:21.753    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.753    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:21.753    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.753   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:09:21.753    14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:09:21.753    14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:09:21.753    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:21.753    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:21.753    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.753   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:09:21.753    14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name
00:09:21.753    14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "fa3c6e3b-9176-4f34-9148-f1f501ae9b1a"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "fa3c6e3b-9176-4f34-9148-f1f501ae9b1a",' '  "numa_id": -1,' '  "md_size": 64,' '  "md_interleave": false,' '  "dif_type": 0,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": true,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme1n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655104,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme1n1",' '      "offset_blocks": 256,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme1n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655103,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme1n1",' '      "offset_blocks": 655360,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}' '{' '  "name": "Nvme2n1",' '  "aliases": [' '    "91d83532-c679-4938-8af5-a7d78fa06ef2"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "91d83532-c679-4938-8af5-a7d78fa06ef2",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n2",' '  "aliases": [' '    "9e8642b1-0011-4ade-aad8-c28349ed725a"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "9e8642b1-0011-4ade-aad8-c28349ed725a",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 2,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme2n3",' '  "aliases": [' '    "27b2078a-24e4-4100-b3ab-a98a125b5153"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "27b2078a-24e4-4100-b3ab-a98a125b5153",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:12.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:12.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12342",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12342",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 3,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}' '{' '  "name": "Nvme3n1",' '  "aliases": [' '    "ea0b01bf-b904-4dc0-b0e8-e3e896f3dfe6"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "ea0b01bf-b904-4dc0-b0e8-e3e896f3dfe6",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:13.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:13.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12343",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": true,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": true' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
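[The mapfile/jq pair above reduces that bdev_get_bdevs dump to the names of unclaimed bdevs; note Nvme1n1 itself is absent because the two GPT partitions claim their base bdev. Equivalent as a one-liner against the same RPC socket:]

    # One-liner form of the bdevs_name extraction above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | select(.claimed == false) | .name'
    # -> Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1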
00:09:22.012   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:09:22.012   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1
00:09:22.012   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:09:22.012   14:20:00 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62381
00:09:22.012   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62381 ']'
00:09:22.012   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62381
00:09:22.012    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname
00:09:22.012   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:22.012    14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62381
00:09:22.012  killing process with pid 62381
00:09:22.012   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:22.012   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:22.012   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62381'
00:09:22.012   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62381
00:09:22.012   14:20:00 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62381
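[The teardown above is the harness's killprocess pattern: confirm the pid is still alive with kill -0, check the process name and refuse to signal a sudo wrapper, then kill and reap. A standalone sketch of the same flow:]

    #!/usr/bin/env bash
    # Sketch of the killprocess pattern traced above.
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0       # already gone
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1      # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true              # reap if it was our child
    }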
00:09:24.540   14:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:24.540   14:20:03 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:09:24.540   14:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:09:24.540   14:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:24.540   14:20:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:24.540  ************************************
00:09:24.540  START TEST bdev_hello_world
00:09:24.540  ************************************
00:09:24.540   14:20:03 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:09:24.540  [2024-11-20 14:20:03.145012] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:09:24.540  [2024-11-20 14:20:03.145254] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63018 ]
00:09:24.540  [2024-11-20 14:20:03.360315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:24.540  [2024-11-20 14:20:03.475981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:25.486  [2024-11-20 14:20:04.138655] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:09:25.486  [2024-11-20 14:20:04.138737] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:09:25.486  [2024-11-20 14:20:04.138783] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:09:25.486  [2024-11-20 14:20:04.141953] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:09:25.486  [2024-11-20 14:20:04.142511] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:09:25.486  [2024-11-20 14:20:04.142557] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:09:25.486  [2024-11-20 14:20:04.142836] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:09:25.486  
00:09:25.486  [2024-11-20 14:20:04.142887] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:09:26.432  
00:09:26.433  real	0m2.221s
00:09:26.433  user	0m1.858s
00:09:26.433  sys	0m0.245s
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:26.433  ************************************
00:09:26.433  END TEST bdev_hello_world
00:09:26.433  ************************************
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:26.433   14:20:05 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:09:26.433   14:20:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:26.433   14:20:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:26.433   14:20:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:26.433  ************************************
00:09:26.433  START TEST bdev_bounds
00:09:26.433  ************************************
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63061
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:09:26.433  Process bdevio pid: 63061
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63061'
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63061
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63061 ']'
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:26.433  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:26.433   14:20:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
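[waitforlisten above blocks until the freshly launched bdevio exposes its RPC endpoint. A simplified sketch of the idea: poll for the UNIX-domain socket while confirming the process stays alive. The harness's version waits on the RPC layer itself rather than just stat-ing the socket:]

    #!/usr/bin/env bash
    # Poll until a target process creates its UNIX-domain RPC socket.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died early
            [[ -S $rpc_addr ]] && return 0           # socket is up
            sleep 0.1
        done
        return 1                                     # timed out
    }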
00:09:26.433  [2024-11-20 14:20:05.369710] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:09:26.433  [2024-11-20 14:20:05.369871] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63061 ]
00:09:26.691  [2024-11-20 14:20:05.547183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:26.691  [2024-11-20 14:20:05.654413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:26.691  [2024-11-20 14:20:05.654497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:26.691  [2024-11-20 14:20:05.654499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:27.626   14:20:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:27.626   14:20:06 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:09:27.626   14:20:06 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:09:27.626  I/O targets:
00:09:27.626    Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:09:27.626    Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:09:27.626    Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:09:27.626    Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:09:27.626    Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:09:27.626    Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:09:27.626    Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:09:27.626  
00:09:27.626  
00:09:27.626       CUnit - A unit testing framework for C - Version 2.1-3
00:09:27.626       http://cunit.sourceforge.net/
00:09:27.626  
00:09:27.626  
00:09:27.626  Suite: bdevio tests on: Nvme3n1
00:09:27.626    Test: blockdev write read block ...passed
00:09:27.626    Test: blockdev write zeroes read block ...passed
00:09:27.626    Test: blockdev write zeroes read no split ...passed
00:09:27.883    Test: blockdev write zeroes read split ...passed
00:09:27.883    Test: blockdev write zeroes read split partial ...passed
00:09:27.883    Test: blockdev reset ...[2024-11-20 14:20:06.638239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:09:27.883  [2024-11-20 14:20:06.642391] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:09:27.883  passed
00:09:27.883    Test: blockdev write read 8 blocks ...passed
00:09:27.883    Test: blockdev write read size > 128k ...passed
00:09:27.883    Test: blockdev write read invalid size ...passed
00:09:27.883    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:27.883    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:27.883    Test: blockdev write read max offset ...passed
00:09:27.883    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:27.883    Test: blockdev writev readv 8 blocks ...passed
00:09:27.883    Test: blockdev writev readv 30 x 1block ...passed
00:09:27.883    Test: blockdev writev readv block ...passed
00:09:27.883    Test: blockdev writev readv size > 128k ...passed
00:09:27.883    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:27.883    Test: blockdev comparev and writev ...[2024-11-20 14:20:06.649600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8804000 len:0x1000
00:09:27.883  [2024-11-20 14:20:06.649672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:27.883  passed
00:09:27.883    Test: blockdev nvme passthru rw ...passed
00:09:27.883    Test: blockdev nvme passthru vendor specific ...passed
00:09:27.883    Test: blockdev nvme admin passthru ...[2024-11-20 14:20:06.650561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:09:27.883  [2024-11-20 14:20:06.650621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:09:27.883  passed
00:09:27.883    Test: blockdev copy ...passed
00:09:27.883  Suite: bdevio tests on: Nvme2n3
00:09:27.883    Test: blockdev write read block ...passed
00:09:27.883    Test: blockdev write zeroes read block ...passed
00:09:27.883    Test: blockdev write zeroes read no split ...passed
00:09:27.883    Test: blockdev write zeroes read split ...passed
00:09:27.883    Test: blockdev write zeroes read split partial ...passed
00:09:27.883    Test: blockdev reset ...[2024-11-20 14:20:06.726818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:09:27.883  [2024-11-20 14:20:06.730903] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:09:27.883  passed
00:09:27.883    Test: blockdev write read 8 blocks ...passed
00:09:27.883    Test: blockdev write read size > 128k ...passed
00:09:27.883    Test: blockdev write read invalid size ...passed
00:09:27.883    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:27.883    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:27.883    Test: blockdev write read max offset ...passed
00:09:27.883    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:27.883    Test: blockdev writev readv 8 blocks ...passed
00:09:27.883    Test: blockdev writev readv 30 x 1block ...passed
00:09:27.883    Test: blockdev writev readv block ...passed
00:09:27.883    Test: blockdev writev readv size > 128k ...passed
00:09:27.883    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:27.883    Test: blockdev comparev and writev ...[2024-11-20 14:20:06.739202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8802000 len:0x1000
00:09:27.883  [2024-11-20 14:20:06.739303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:27.883  passed
00:09:27.883    Test: blockdev nvme passthru rw ...passed
00:09:27.883    Test: blockdev nvme passthru vendor specific ...passed
00:09:27.883    Test: blockdev nvme admin passthru ...[2024-11-20 14:20:06.740181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:09:27.883  [2024-11-20 14:20:06.740249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:09:27.883  passed
00:09:27.883    Test: blockdev copy ...passed
00:09:27.883  Suite: bdevio tests on: Nvme2n2
00:09:27.883    Test: blockdev write read block ...passed
00:09:27.883    Test: blockdev write zeroes read block ...passed
00:09:27.883    Test: blockdev write zeroes read no split ...passed
00:09:27.883    Test: blockdev write zeroes read split ...passed
00:09:27.883    Test: blockdev write zeroes read split partial ...passed
00:09:27.883    Test: blockdev reset ...[2024-11-20 14:20:06.826632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:09:27.883  [2024-11-20 14:20:06.831247] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:09:27.883  passed
00:09:27.883    Test: blockdev write read 8 blocks ...passed
00:09:27.883    Test: blockdev write read size > 128k ...passed
00:09:27.884    Test: blockdev write read invalid size ...passed
00:09:27.884    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:27.884    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:27.884    Test: blockdev write read max offset ...passed
00:09:27.884    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:27.884    Test: blockdev writev readv 8 blocks ...passed
00:09:27.884    Test: blockdev writev readv 30 x 1block ...passed
00:09:27.884    Test: blockdev writev readv block ...passed
00:09:27.884    Test: blockdev writev readv size > 128k ...passed
00:09:27.884    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:27.884    Test: blockdev comparev and writev ...[2024-11-20 14:20:06.840239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cce38000 len:0x1000
00:09:27.884  [2024-11-20 14:20:06.840317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:27.884  passed
00:09:27.884    Test: blockdev nvme passthru rw ...passed
00:09:27.884    Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:20:06.841217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:09:27.884  [2024-11-20 14:20:06.841281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:09:27.884  passed
00:09:27.884    Test: blockdev nvme admin passthru ...passed
00:09:27.884    Test: blockdev copy ...passed
00:09:27.884  Suite: bdevio tests on: Nvme2n1
00:09:27.884    Test: blockdev write read block ...passed
00:09:27.884    Test: blockdev write zeroes read block ...passed
00:09:27.884    Test: blockdev write zeroes read no split ...passed
00:09:28.142    Test: blockdev write zeroes read split ...passed
00:09:28.142    Test: blockdev write zeroes read split partial ...passed
00:09:28.142    Test: blockdev reset ...[2024-11-20 14:20:06.926402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:09:28.142  [2024-11-20 14:20:06.930534] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:09:28.142  passed
00:09:28.142    Test: blockdev write read 8 blocks ...passed
00:09:28.142    Test: blockdev write read size > 128k ...passed
00:09:28.142    Test: blockdev write read invalid size ...passed
00:09:28.142    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:28.142    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:28.142    Test: blockdev write read max offset ...passed
00:09:28.142    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:28.142    Test: blockdev writev readv 8 blocks ...passed
00:09:28.142    Test: blockdev writev readv 30 x 1block ...passed
00:09:28.142    Test: blockdev writev readv block ...passed
00:09:28.142    Test: blockdev writev readv size > 128k ...passed
00:09:28.142    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:28.142    Test: blockdev comparev and writev ...[2024-11-20 14:20:06.938725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cce34000 len:0x1000
00:09:28.142  [2024-11-20 14:20:06.938818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:28.142  passed
00:09:28.142    Test: blockdev nvme passthru rw ...passed
00:09:28.142    Test: blockdev nvme passthru vendor specific ...[2024-11-20 14:20:06.939665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:09:28.142  [2024-11-20 14:20:06.939719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:09:28.142  passed
00:09:28.142    Test: blockdev nvme admin passthru ...passed
00:09:28.142    Test: blockdev copy ...passed
00:09:28.142  Suite: bdevio tests on: Nvme1n1p2
00:09:28.142    Test: blockdev write read block ...passed
00:09:28.142    Test: blockdev write zeroes read block ...passed
00:09:28.142    Test: blockdev write zeroes read no split ...passed
00:09:28.142    Test: blockdev write zeroes read split ...passed
00:09:28.142    Test: blockdev write zeroes read split partial ...passed
00:09:28.142    Test: blockdev reset ...[2024-11-20 14:20:07.027046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:09:28.142  [2024-11-20 14:20:07.031184] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:09:28.142  passed
00:09:28.142    Test: blockdev write read 8 blocks ...passed
00:09:28.143    Test: blockdev write read size > 128k ...passed
00:09:28.143    Test: blockdev write read invalid size ...passed
00:09:28.143    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:28.143    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:28.143    Test: blockdev write read max offset ...passed
00:09:28.143    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:28.143    Test: blockdev writev readv 8 blocks ...passed
00:09:28.143    Test: blockdev writev readv 30 x 1block ...passed
00:09:28.143    Test: blockdev writev readv block ...passed
00:09:28.143    Test: blockdev writev readv size > 128k ...passed
00:09:28.143    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:28.143    Test: blockdev comparev and writev ...[2024-11-20 14:20:07.039223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cce30000 len:0x1000
00:09:28.143  [2024-11-20 14:20:07.039315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:28.143  passed
00:09:28.143    Test: blockdev nvme passthru rw ...passed
00:09:28.143    Test: blockdev nvme passthru vendor specific ...passed
00:09:28.143    Test: blockdev nvme admin passthru ...passed
00:09:28.143    Test: blockdev copy ...passed
00:09:28.143  Suite: bdevio tests on: Nvme1n1p1
00:09:28.143    Test: blockdev write read block ...passed
00:09:28.143    Test: blockdev write zeroes read block ...passed
00:09:28.143    Test: blockdev write zeroes read no split ...passed
00:09:28.143    Test: blockdev write zeroes read split ...passed
00:09:28.143    Test: blockdev write zeroes read split partial ...passed
00:09:28.143    Test: blockdev reset ...[2024-11-20 14:20:07.100160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:09:28.143  [2024-11-20 14:20:07.103817] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:09:28.143  passed
00:09:28.143    Test: blockdev write read 8 blocks ...passed
00:09:28.143    Test: blockdev write read size > 128k ...passed
00:09:28.143    Test: blockdev write read invalid size ...passed
00:09:28.143    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:28.143    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:28.143    Test: blockdev write read max offset ...passed
00:09:28.143    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:28.143    Test: blockdev writev readv 8 blocks ...passed
00:09:28.143    Test: blockdev writev readv 30 x 1block ...passed
00:09:28.143    Test: blockdev writev readv block ...passed
00:09:28.143    Test: blockdev writev readv size > 128k ...passed
00:09:28.143    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:28.143    Test: blockdev comparev and writev ...[2024-11-20 14:20:07.111663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b920e000 len:0x1000
00:09:28.143  [2024-11-20 14:20:07.111757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:09:28.143  passed
00:09:28.143    Test: blockdev nvme passthru rw ...passed
00:09:28.143    Test: blockdev nvme passthru vendor specific ...passed
00:09:28.143    Test: blockdev nvme admin passthru ...passed
00:09:28.143    Test: blockdev copy ...passed
00:09:28.143  Suite: bdevio tests on: Nvme0n1
00:09:28.143    Test: blockdev write read block ...passed
00:09:28.143    Test: blockdev write zeroes read block ...passed
00:09:28.143    Test: blockdev write zeroes read no split ...passed
00:09:28.401    Test: blockdev write zeroes read split ...passed
00:09:28.401    Test: blockdev write zeroes read split partial ...passed
00:09:28.401    Test: blockdev reset ...[2024-11-20 14:20:07.173546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:09:28.401  [2024-11-20 14:20:07.177414] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:09:28.401  passed
00:09:28.401    Test: blockdev write read 8 blocks ...passed
00:09:28.401    Test: blockdev write read size > 128k ...passed
00:09:28.401    Test: blockdev write read invalid size ...passed
00:09:28.401    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:28.401    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:28.401    Test: blockdev write read max offset ...passed
00:09:28.401    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:28.401    Test: blockdev writev readv 8 blocks ...passed
00:09:28.401    Test: blockdev writev readv 30 x 1block ...passed
00:09:28.401    Test: blockdev writev readv block ...passed
00:09:28.401    Test: blockdev writev readv size > 128k ...passed
00:09:28.401    Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:28.401    Test: blockdev comparev and writev ...passed
00:09:28.401    Test: blockdev nvme passthru rw ...[2024-11-20 14:20:07.185068] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:09:28.401  separate metadata which is not supported yet.
00:09:28.401  passed
00:09:28.401    Test: blockdev nvme passthru vendor specific ...passed
00:09:28.401    Test: blockdev nvme admin passthru ...[2024-11-20 14:20:07.185774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:09:28.401  [2024-11-20 14:20:07.185845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:09:28.401  passed
00:09:28.401    Test: blockdev copy ...passed
00:09:28.401  
00:09:28.401  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:28.401                suites      7      7    n/a      0        0
00:09:28.401                 tests    161    161    161      0        0
00:09:28.401               asserts   1025   1025   1025      0      n/a
00:09:28.401  
00:09:28.401  Elapsed time =    1.708 seconds
00:09:28.401  0
00:09:28.401   14:20:07 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63061
00:09:28.401   14:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63061 ']'
00:09:28.401   14:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63061
00:09:28.401    14:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:09:28.401   14:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:28.401    14:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63061
00:09:28.401   14:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:28.401   14:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:28.401   14:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63061'
00:09:28.401  killing process with pid 63061
00:09:28.402   14:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63061
00:09:28.402   14:20:07 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63061
00:09:29.335   14:20:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:09:29.335  
00:09:29.335  real	0m2.937s
00:09:29.335  user	0m7.733s
00:09:29.335  sys	0m0.355s
00:09:29.335   14:20:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:29.335   14:20:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:09:29.335  ************************************
00:09:29.335  END TEST bdev_bounds
00:09:29.335  ************************************
00:09:29.335   14:20:08 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:09:29.335   14:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:09:29.335   14:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:29.335   14:20:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:29.335  ************************************
00:09:29.335  START TEST bdev_nbd
00:09:29.335  ************************************
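The START TEST / END TEST banners and the real/user/sys timing above come from the run_test wrapper, which also sanity-checks its argument count ('[' 5 -le 1 ']' in the trace) before dispatching. A rough sketch of the wrapper under those assumptions; the real helper lives in common/autotest_common.sh and may differ in detail:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # emits the real/user/sys lines seen after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }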
00:09:29.335   14:20:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:09:29.335    14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:09:29.335   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:09:29.335   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:29.335   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:09:29.335   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:29.335   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63121
00:09:29.336  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63121 /var/tmp/spdk-nbd.sock
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63121 ']'
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:29.336   14:20:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:09:29.594  [2024-11-20 14:20:08.359546] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:09:29.594  [2024-11-20 14:20:08.359716] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:29.594  [2024-11-20 14:20:08.537521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:29.852  [2024-11-20 14:20:08.647714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
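At this point bdev_svc has been launched against the JSON config on a private RPC socket, a trap guarantees cleanup on any exit, and waitforlisten polls until the socket is usable (max_retries=100 per the trace). A minimal sketch of the sequence; the exact readiness probe (a socket existence test here, versus an actual RPC ping) is an assumption:

    rpc_server=/var/tmp/spdk-nbd.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_server" -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!
    trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT   # same trap as the trace

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during init
            [ -S "$rpc_addr" ] && return 0           # assumed readiness probe
            sleep 0.1
        done
        return 1
    }
    waitforlisten "$nbd_pid" "$rpc_server"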
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:09:30.538   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:30.538    14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:09:30.796    14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:30.796  1+0 records in
00:09:30.796  1+0 records out
00:09:30.796  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000660688 s, 6.2 MB/s
00:09:30.796    14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:30.796   14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
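Each bdev is attached with rpc.py nbd_start_disk and then validated by waitfornbd, which first waits for the device to show up in /proc/partitions and then proves I/O actually works by reading one 4 KiB block with O_DIRECT and checking the copied size, exactly the dd/stat/rm sequence traced above. A sketch reconstructed from the trace (the retry delay is assumed):

    waitfornbd() {
        local nbd_name=$1 size i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # kernel registered the device
            sleep 0.1                                          # assumed back-off between retries
        done
        for ((i = 1; i <= 20; i++)); do
            # one 4 KiB O_DIRECT read proves the block device serves real I/O
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }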
00:09:30.796    14:20:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:09:31.362    14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:31.362  1+0 records in
00:09:31.362  1+0 records out
00:09:31.362  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598017 s, 6.8 MB/s
00:09:31.362    14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:31.362   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:31.362    14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:09:31.620    14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:31.620  1+0 records in
00:09:31.620  1+0 records out
00:09:31.620  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657608 s, 6.2 MB/s
00:09:31.620    14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:31.620   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:31.620    14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:09:31.878    14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:31.878  1+0 records in
00:09:31.878  1+0 records out
00:09:31.878  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000552062 s, 7.4 MB/s
00:09:31.878    14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:31.878   14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:31.878    14:20:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:09:32.444    14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:32.444  1+0 records in
00:09:32.444  1+0 records out
00:09:32.444  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797884 s, 5.1 MB/s
00:09:32.444    14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:32.444   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:32.444    14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:09:32.702    14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:32.702  1+0 records in
00:09:32.702  1+0 records out
00:09:32.702  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000772433 s, 5.3 MB/s
00:09:32.702    14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:32.702   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:32.702    14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:09:32.960   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:09:32.960    14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:09:33.270   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:09:33.270   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:09:33.270   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:33.270   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:33.270   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:33.270   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:09:33.270   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:33.271   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:33.271   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:33.271   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:33.271  1+0 records in
00:09:33.271  1+0 records out
00:09:33.271  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064372 s, 6.4 MB/s
00:09:33.271    14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:33.271   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:33.271   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:33.271   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:33.271   14:20:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:33.271   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:09:33.271   14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:09:33.271    14:20:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:33.594   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd0",
00:09:33.594      "bdev_name": "Nvme0n1"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd1",
00:09:33.594      "bdev_name": "Nvme1n1p1"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd2",
00:09:33.594      "bdev_name": "Nvme1n1p2"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd3",
00:09:33.594      "bdev_name": "Nvme2n1"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd4",
00:09:33.594      "bdev_name": "Nvme2n2"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd5",
00:09:33.594      "bdev_name": "Nvme2n3"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd6",
00:09:33.594      "bdev_name": "Nvme3n1"
00:09:33.594    }
00:09:33.594  ]'
00:09:33.594   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:09:33.594    14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd0",
00:09:33.594      "bdev_name": "Nvme0n1"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd1",
00:09:33.594      "bdev_name": "Nvme1n1p1"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd2",
00:09:33.594      "bdev_name": "Nvme1n1p2"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd3",
00:09:33.594      "bdev_name": "Nvme2n1"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd4",
00:09:33.594      "bdev_name": "Nvme2n2"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd5",
00:09:33.594      "bdev_name": "Nvme2n3"
00:09:33.594    },
00:09:33.594    {
00:09:33.594      "nbd_device": "/dev/nbd6",
00:09:33.594      "bdev_name": "Nvme3n1"
00:09:33.594    }
00:09:33.594  ]'
00:09:33.594    14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
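nbd_get_disks returns a JSON array of nbd_device/bdev_name pairs, and the script flattens it into a bash array of device paths with the jq filter shown above; for this run the filter yields all seven devices:

    nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))
    # yields: /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6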
00:09:33.594   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6'
00:09:33.594   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:33.594   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6')
00:09:33.594   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:33.594   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:09:33.594   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:33.594   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:33.852    14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:33.852   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:33.852   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:33.852   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:33.852   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:33.852   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:33.852   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:33.852   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
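The stop path mirrors the start path: nbd_stop_disk detaches the device over RPC, and waitfornbd_exit polls /proc/partitions until the name disappears. A sketch of the helper (the trace only shows the success path, and the retry delay is assumed):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone from the kernel's view
            sleep 0.1                                          # assumed retry delay
        done
        return 0
    }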
00:09:33.852   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:33.852   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:34.111    14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:34.111   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:34.111   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:34.111   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:34.111   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:34.111   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:34.111   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:34.111   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:34.111   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:34.111   14:20:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:09:34.369    14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:09:34.369   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:09:34.369   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:09:34.369   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:34.369   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:34.369   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:09:34.369   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:34.369   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:34.369   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:34.369   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:09:34.628    14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:09:34.628   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:09:34.628   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:09:34.628   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:34.628   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:34.628   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:09:34.628   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:34.628   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:34.628   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:34.628   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:09:35.196    14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:09:35.196   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:09:35.196   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:09:35.196   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:35.196   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:35.196   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:09:35.196   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:35.196   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:35.196   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:35.196   14:20:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:09:35.196    14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:09:35.196   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:09:35.196   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:09:35.196   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:35.196   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:35.196   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:09:35.196   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:35.196   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:35.196   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:35.196   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:09:35.757    14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:09:35.757   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:09:35.757   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:09:35.757   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:35.757   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:35.757   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:09:35.757   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:35.757   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:35.757    14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:35.757    14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:35.757     14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:35.757    14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:35.757     14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:35.757     14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:36.015    14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:36.015     14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:09:36.015     14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:36.015     14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:09:36.015    14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:09:36.015    14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
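With all seven devices stopped, nbd_get_count re-queries the server and counts /dev/nbd entries; the bare true at nbd_common.sh@65 in the trace guards against grep -c exiting non-zero when the count is 0. A sketch of that check (the error message is illustrative, not from the source):

    nbd_get_count() {
        local rpc_server=$1 json names count
        json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits 1 when the count is 0
        echo "$count"
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    if [ "$count" -ne 0 ]; then
        echo "error: $count nbd device(s) still attached after stop" >&2
        exit 1
    fi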
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:36.015   14:20:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:09:36.273  /dev/nbd0
00:09:36.273    14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:36.273  1+0 records in
00:09:36.273  1+0 records out
00:09:36.273  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429482 s, 9.5 MB/s
00:09:36.273    14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
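This second pass (nbd_rpc_data_verify) differs from the first in that the caller picks the device node itself, passing /dev/nbd0, /dev/nbd1, /dev/nbd10 and so on to nbd_start_disk instead of letting the server choose; the RPC echoes the device path back, which is why each bare /dev/nbdN line appears in the log. The loop traced here is roughly:

    bdev_list=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # explicit device argument: the caller, not the server, picks /dev/nbdX
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done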
00:09:36.273   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1
00:09:36.531  /dev/nbd1
00:09:36.531    14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:36.531  1+0 records in
00:09:36.531  1+0 records out
00:09:36.531  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638183 s, 6.4 MB/s
00:09:36.531    14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:36.531   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10
00:09:37.097  /dev/nbd10
00:09:37.097    14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:37.097  1+0 records in
00:09:37.097  1+0 records out
00:09:37.097  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593419 s, 6.9 MB/s
00:09:37.097    14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:37.097   14:20:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11
00:09:37.355  /dev/nbd11
00:09:37.355    14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:37.355  1+0 records in
00:09:37.355  1+0 records out
00:09:37.355  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542341 s, 7.6 MB/s
00:09:37.355    14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:37.355   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12
00:09:37.613  /dev/nbd12
00:09:37.613    14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:37.613  1+0 records in
00:09:37.613  1+0 records out
00:09:37.613  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721477 s, 5.7 MB/s
00:09:37.613    14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:37.613   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13
00:09:37.873  /dev/nbd13
00:09:37.873    14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:37.873  1+0 records in
00:09:37.873  1+0 records out
00:09:37.873  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689757 s, 5.9 MB/s
00:09:37.873    14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:37.873   14:20:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14
00:09:38.439  /dev/nbd14
00:09:38.439    14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:09:38.439  1+0 records in
00:09:38.439  1+0 records out
00:09:38.439  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714657 s, 5.7 MB/s
00:09:38.439    14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:38.439   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:09:38.439    14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:38.439    14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:38.439     14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:38.697    14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:38.697    {
00:09:38.697      "nbd_device": "/dev/nbd0",
00:09:38.697      "bdev_name": "Nvme0n1"
00:09:38.697    },
00:09:38.697    {
00:09:38.697      "nbd_device": "/dev/nbd1",
00:09:38.697      "bdev_name": "Nvme1n1p1"
00:09:38.697    },
00:09:38.697    {
00:09:38.697      "nbd_device": "/dev/nbd10",
00:09:38.697      "bdev_name": "Nvme1n1p2"
00:09:38.697    },
00:09:38.697    {
00:09:38.697      "nbd_device": "/dev/nbd11",
00:09:38.697      "bdev_name": "Nvme2n1"
00:09:38.697    },
00:09:38.697    {
00:09:38.698      "nbd_device": "/dev/nbd12",
00:09:38.698      "bdev_name": "Nvme2n2"
00:09:38.698    },
00:09:38.698    {
00:09:38.698      "nbd_device": "/dev/nbd13",
00:09:38.698      "bdev_name": "Nvme2n3"
00:09:38.698    },
00:09:38.698    {
00:09:38.698      "nbd_device": "/dev/nbd14",
00:09:38.698      "bdev_name": "Nvme3n1"
00:09:38.698    }
00:09:38.698  ]'
00:09:38.698     14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:09:38.698    {
00:09:38.698      "nbd_device": "/dev/nbd0",
00:09:38.698      "bdev_name": "Nvme0n1"
00:09:38.698    },
00:09:38.698    {
00:09:38.698      "nbd_device": "/dev/nbd1",
00:09:38.698      "bdev_name": "Nvme1n1p1"
00:09:38.698    },
00:09:38.698    {
00:09:38.698      "nbd_device": "/dev/nbd10",
00:09:38.698      "bdev_name": "Nvme1n1p2"
00:09:38.698    },
00:09:38.698    {
00:09:38.698      "nbd_device": "/dev/nbd11",
00:09:38.698      "bdev_name": "Nvme2n1"
00:09:38.698    },
00:09:38.698    {
00:09:38.698      "nbd_device": "/dev/nbd12",
00:09:38.698      "bdev_name": "Nvme2n2"
00:09:38.698    },
00:09:38.698    {
00:09:38.698      "nbd_device": "/dev/nbd13",
00:09:38.698      "bdev_name": "Nvme2n3"
00:09:38.698    },
00:09:38.698    {
00:09:38.698      "nbd_device": "/dev/nbd14",
00:09:38.698      "bdev_name": "Nvme3n1"
00:09:38.698    }
00:09:38.698  ]'
00:09:38.698     14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:38.698    14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:38.698  /dev/nbd1
00:09:38.698  /dev/nbd10
00:09:38.698  /dev/nbd11
00:09:38.698  /dev/nbd12
00:09:38.698  /dev/nbd13
00:09:38.698  /dev/nbd14'
00:09:38.698     14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:38.698  /dev/nbd1
00:09:38.698  /dev/nbd10
00:09:38.698  /dev/nbd11
00:09:38.698  /dev/nbd12
00:09:38.698  /dev/nbd13
00:09:38.698  /dev/nbd14'
00:09:38.698     14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:38.698    14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7
00:09:38.698    14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']'
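[Editor's note: the trace above walks through nbd_get_count in full — list the exported disks over RPC, pull out the device paths with jq, count them. A minimal standalone sketch of the same pattern, reusing the rpc.py path and nbd socket from this run:]

  #!/usr/bin/env bash
  # Count NBD devices currently exported by the SPDK nbd server,
  # mirroring bdev/nbd_common.sh nbd_get_count as traced above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  # nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs.
  disks_json=$("$rpc" -s "$sock" nbd_get_disks)

  # Extract the device paths and count them. 'grep -c' exits non-zero on
  # zero matches, so '|| true' keeps 'set -e' scripts alive -- the traced
  # helper handles the same empty case with an explicit 'true' at
  # nbd_common.sh@65 later in this log.
  count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  echo "$count"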
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:09:38.698  256+0 records in
00:09:38.698  256+0 records out
00:09:38.698  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00622759 s, 168 MB/s
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:38.698  256+0 records in
00:09:38.698  256+0 records out
00:09:38.698  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128878 s, 8.1 MB/s
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:38.698   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:38.957  256+0 records in
00:09:38.957  256+0 records out
00:09:38.957  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15253 s, 6.9 MB/s
00:09:38.957   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:38.957   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:09:39.215  256+0 records in
00:09:39.215  256+0 records out
00:09:39.215  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146327 s, 7.2 MB/s
00:09:39.215   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:39.215   14:20:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:09:39.215  256+0 records in
00:09:39.215  256+0 records out
00:09:39.215  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138637 s, 7.6 MB/s
00:09:39.215   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:39.215   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:09:39.474  256+0 records in
00:09:39.474  256+0 records out
00:09:39.474  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145301 s, 7.2 MB/s
00:09:39.474   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:39.474   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:09:39.474  256+0 records in
00:09:39.474  256+0 records out
00:09:39.474  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143334 s, 7.3 MB/s
00:09:39.474   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:39.474   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:09:39.732  256+0 records in
00:09:39.732  256+0 records out
00:09:39.732  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153745 s, 6.8 MB/s
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
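[Editor's note: the sequence above is the heart of nbd_dd_data_verify — generate one 1 MiB random pattern, write it to every NBD node with O_DIRECT, then byte-compare each node against the pattern. A condensed sketch using the same seven devices:]

  # Write the pattern, then verify every device returns exactly those
  # bytes (as nbd_common.sh@70-85 does in the trace above).
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
  tmp_file=/tmp/nbdrandtest   # the run above uses test/bdev/nbdrandtest

  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

  for dev in "${nbd_list[@]}"; do
      # oflag=direct bypasses the page cache so the data really hits the bdev.
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  for dev in "${nbd_list[@]}"; do
      # cmp -n 1M compares only the first MiB; -b prints differing bytes.
      cmp -b -n 1M "$tmp_file" "$dev"
  done

  rm "$tmp_file"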
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:39.732   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:39.990    14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:39.990   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:39.990   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:39.990   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:39.990   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:39.990   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:39.990   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:39.990   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:39.990   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:39.990   14:20:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:40.556    14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:40.556   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:40.556   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:40.556   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:40.556   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:40.556   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:40.556   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:40.556   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:40.556   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:40.556   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:09:40.815    14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:09:40.815   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:09:40.815   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:09:40.815   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:40.815   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:40.815   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:09:40.815   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:40.815   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:40.815   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:40.815   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:09:41.074    14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:09:41.074   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:09:41.074   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:09:41.074   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:41.074   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:41.074   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:09:41.074   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:41.074   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:41.074   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:41.074   14:20:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:09:41.333    14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:09:41.333   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:09:41.333   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:09:41.333   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:41.333   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:41.333   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:09:41.333   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:41.333   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:41.333   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:41.333   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:09:41.592    14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:09:41.592   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:09:41.592   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:09:41.592   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:41.592   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:41.592   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:09:41.592   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:41.592   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:41.592   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:41.592   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:09:42.161    14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:09:42.161   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:09:42.161   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:09:42.161   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:42.161   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:42.161   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:09:42.161   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:42.161   14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
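[Editor's note: each nbd_stop_disk above is followed by waitfornbd_exit, which polls /proc/partitions until the kernel drops the device. A sketch of that loop — the 20-iteration bound and the grep come straight from the traced helper; the sleep interval is an assumption, since every device in this run disappears on the first poll and the trace never reaches the sleep:]

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              sleep 0.1   # assumed interval; not exercised in this run
          else
              return 0    # kernel has released the device
          fi
      done
      return 1            # still present after 20 polls
  }

  waitfornbd_exit nbd0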
00:09:42.161    14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:42.161    14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:42.161     14:20:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:42.420    14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:42.420     14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:42.420     14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:42.420    14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:42.420     14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:09:42.420     14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:42.420     14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:09:42.420    14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:09:42.420    14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:09:42.420   14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:09:42.420   14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:42.420   14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:09:42.420   14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:42.420   14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:42.420   14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:09:42.420   14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:09:42.678  malloc_lvol_verify
00:09:42.678   14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:09:42.936  ee76a733-881c-48cf-a1d5-7dda152f3f49
00:09:43.195   14:20:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:09:43.453  5dd8ce79-a40a-43a3-bbab-03bc85392d31
00:09:43.453   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:09:43.712  /dev/nbd0
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:09:43.712  mke2fs 1.47.0 (5-Feb-2023)
00:09:43.712  Discarding device blocks: 0/4096 done
00:09:43.712  Creating filesystem with 4096 1k blocks and 1024 inodes
00:09:43.712  
00:09:43.712  Allocating group tables: 0/1 done
00:09:43.712  Writing inode tables: 0/1 done
00:09:43.712  Creating journal (1024 blocks): done
00:09:43.712  Writing superblocks and filesystem accounting information: 0/1 done
00:09:43.712  
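[Editor's note: the lvol-verify step above stacks four layers before touching the filesystem: a malloc bdev, an lvstore on it, an lvol in the store, and an NBD export of the lvol. Condensed into one sketch, with the sizes and names taken from this run:]

  # End-to-end lvol-over-NBD smoke test (nbd_common.sh@131-142 above).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB, 512 B blocks
  "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
  "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                   # 4 MiB lvol
  "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0

  # If mkfs succeeds, the whole stack (lvol -> lvstore -> malloc -> nbd) works.
  mkfs.ext4 /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0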
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:43.712   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:43.970    14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63121
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63121 ']'
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63121
00:09:43.970    14:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:43.970    14:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63121
00:09:43.970  killing process with pid 63121
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63121'
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63121
00:09:43.970   14:20:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63121
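[Editor's note: killprocess (autotest_common.sh@954-978, traced above) is the standard teardown — confirm the pid is alive, check it is a reactor process rather than a sudo wrapper, signal it, then reap it. A simplified sketch of that shape; the real helper has extra branches this run does not exercise:]

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1             # is the process alive?
      if [ "$(uname)" = Linux ]; then
          # refuse to signal a sudo wrapper; the target here is reactor_0
          local name
          name=$(ps --no-headers -o comm= "$pid")
          [ "$name" != sudo ] || return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"   # works because the test script started the pid itself
  }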
00:09:45.344  ************************************
00:09:45.344  END TEST bdev_nbd
00:09:45.344  ************************************
00:09:45.344   14:20:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:09:45.344  
00:09:45.344  real	0m15.716s
00:09:45.344  user	0m23.232s
00:09:45.344  sys	0m4.748s
00:09:45.344   14:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:45.344   14:20:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:09:45.344   14:20:24 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:09:45.344   14:20:24 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']'
00:09:45.344   14:20:24 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']'
00:09:45.344  skipping fio tests on NVMe due to multi-ns failures.
00:09:45.344   14:20:24 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:09:45.344   14:20:24 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:45.344   14:20:24 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:45.344   14:20:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:45.344   14:20:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:45.344   14:20:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:45.344  ************************************
00:09:45.344  START TEST bdev_verify
00:09:45.344  ************************************
00:09:45.344   14:20:24 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:09:45.344  [2024-11-20 14:20:24.130065] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:09:45.344  [2024-11-20 14:20:24.130244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63589 ]
00:09:45.344  [2024-11-20 14:20:24.315741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:45.603  [2024-11-20 14:20:24.420215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:45.603  [2024-11-20 14:20:24.420216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:46.171  Running I/O for 5 seconds...
00:09:48.478      20224.00 IOPS,    79.00 MiB/s
[2024-11-20T14:20:28.393Z]     19712.00 IOPS,    77.00 MiB/s
[2024-11-20T14:20:29.832Z]     19669.33 IOPS,    76.83 MiB/s
[2024-11-20T14:20:30.427Z]     19872.00 IOPS,    77.62 MiB/s
[2024-11-20T14:20:30.427Z]     19891.20 IOPS,    77.70 MiB/s
00:09:51.445                                                                                                  Latency(us)
00:09:51.445  
[2024-11-20T14:20:30.427Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:51.445  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x0 length 0xbd0bd
00:09:51.445  	 Nvme0n1             :       5.08    1398.55       5.46       0.00     0.00   91187.04   13762.56  127735.62
00:09:51.445  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:51.445  	 Nvme0n1             :       5.10    1405.19       5.49       0.00     0.00   90879.14   18469.24  113436.86
00:09:51.445  Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x0 length 0x4ff80
00:09:51.445  	 Nvme1n1p1           :       5.08    1397.51       5.46       0.00     0.00   91090.13   14954.12  118679.74
00:09:51.445  Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x4ff80 length 0x4ff80
00:09:51.445  	 Nvme1n1p1           :       5.10    1404.64       5.49       0.00     0.00   90752.44   18350.08  119156.36
00:09:51.445  Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x0 length 0x4ff7f
00:09:51.445  	 Nvme1n1p2           :       5.09    1396.99       5.46       0.00     0.00   90969.50   14656.23  121539.49
00:09:51.445  Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:09:51.445  	 Nvme1n1p2           :       5.11    1403.49       5.48       0.00     0.00   90641.97   20494.89  119632.99
00:09:51.445  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x0 length 0x80000
00:09:51.445  	 Nvme2n1             :       5.09    1396.39       5.45       0.00     0.00   90850.97   14834.97  123922.62
00:09:51.445  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x80000 length 0x80000
00:09:51.445  	 Nvme2n1             :       5.11    1402.99       5.48       0.00     0.00   90513.78   20733.21  117726.49
00:09:51.445  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x0 length 0x80000
00:09:51.445  	 Nvme2n2             :       5.09    1395.90       5.45       0.00     0.00   90726.04   14715.81  127735.62
00:09:51.445  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x80000 length 0x80000
00:09:51.445  	 Nvme2n2             :       5.11    1402.49       5.48       0.00     0.00   90380.93   20971.52  116773.24
00:09:51.445  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x0 length 0x80000
00:09:51.445  	 Nvme2n3             :       5.09    1395.40       5.45       0.00     0.00   90599.83   14239.19  130595.37
00:09:51.445  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x80000 length 0x80000
00:09:51.445  	 Nvme2n3             :       5.11    1401.98       5.48       0.00     0.00   90244.29   16801.05  116296.61
00:09:51.445  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x0 length 0x20000
00:09:51.445  	 Nvme3n1             :       5.09    1394.79       5.45       0.00     0.00   90477.91   11796.48  129642.12
00:09:51.445  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:51.445  	 Verification LBA range: start 0x20000 length 0x20000
00:09:51.445  	 Nvme3n1             :       5.11    1401.49       5.47       0.00     0.00   90124.48   13107.20  115343.36
00:09:51.445  
[2024-11-20T14:20:30.427Z]  ===================================================================================================================
00:09:51.445  
[2024-11-20T14:20:30.427Z]  Total                       :              19597.80      76.55       0.00     0.00   90673.42   11796.48  130595.37
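[Editor's note: the MiB/s column in the table above is just IOPS times the 4096-byte IO size. Checking the final 5-second sample from this run:]

  # 19891.20 IOPS * 4096 B per IO / 1048576 B per MiB = 77.70 MiB/s,
  # matching the "19891.20 IOPS, 77.70 MiB/s" line above.
  echo "scale=2; 19891.20 * 4096 / 1048576" | bc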
00:09:52.821  
00:09:52.821  real	0m7.644s
00:09:52.821  user	0m14.110s
00:09:52.821  sys	0m0.283s
00:09:52.821   14:20:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:52.821   14:20:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:52.821  ************************************
00:09:52.821  END TEST bdev_verify
00:09:52.821  ************************************
00:09:52.821   14:20:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:52.821   14:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:09:52.821   14:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:52.821   14:20:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:52.821  ************************************
00:09:52.821  START TEST bdev_verify_big_io
00:09:52.821  ************************************
00:09:52.821   14:20:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:53.079  [2024-11-20 14:20:31.809978] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:09:53.079  [2024-11-20 14:20:31.810127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63693 ]
00:09:53.079  [2024-11-20 14:20:31.983476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:53.338  [2024-11-20 14:20:32.091404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:53.338  [2024-11-20 14:20:32.091416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:53.905  Running I/O for 5 seconds...
00:09:59.786        790.00 IOPS,    49.38 MiB/s
[2024-11-20T14:20:39.365Z]      2657.50 IOPS,   166.09 MiB/s
[2024-11-20T14:20:39.365Z]      3104.33 IOPS,   194.02 MiB/s
00:10:00.383                                                                                                  Latency(us)
00:10:00.383  
[2024-11-20T14:20:39.365Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:00.383  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:00.383  	 Verification LBA range: start 0x0 length 0xbd0b
00:10:00.383  	 Nvme0n1             :       5.77     110.92       6.93       0.00     0.00 1106321.31   23712.12 1250665.19
00:10:00.383  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:00.383  	 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:00.383  	 Nvme0n1             :       5.62     113.91       7.12       0.00     0.00 1078632.82   32648.84 1250665.19
00:10:00.383  Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:00.383  	 Verification LBA range: start 0x0 length 0x4ff8
00:10:00.383  	 Nvme1n1p1           :       5.89     113.43       7.09       0.00     0.00 1057404.45   60769.75 1090519.04
00:10:00.384  Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x4ff8 length 0x4ff8
00:10:00.384  	 Nvme1n1p1           :       5.78     116.27       7.27       0.00     0.00 1027230.70   61484.68 1098145.05
00:10:00.384  Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x0 length 0x4ff7
00:10:00.384  	 Nvme1n1p2           :       5.89     113.15       7.07       0.00     0.00 1020229.29  100091.35  911307.87
00:10:00.384  Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x4ff7 length 0x4ff7
00:10:00.384  	 Nvme1n1p2           :       5.89     119.60       7.47       0.00     0.00  971156.86   99138.09  899868.86
00:10:00.384  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x0 length 0x8000
00:10:00.384  	 Nvme2n1             :       5.98     117.81       7.36       0.00     0.00  957504.66   76736.70  873177.83
00:10:00.384  Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x8000 length 0x8000
00:10:00.384  	 Nvme2n1             :       5.98     124.05       7.75       0.00     0.00  910075.84   58863.24  819795.78
00:10:00.384  Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x0 length 0x8000
00:10:00.384  	 Nvme2n2             :       6.03     121.75       7.61       0.00     0.00  902033.80   49092.42  842673.80
00:10:00.384  Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x8000 length 0x8000
00:10:00.384  	 Nvme2n2             :       5.98     128.34       8.02       0.00     0.00  859138.17   32648.84  812169.77
00:10:00.384  Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x0 length 0x8000
00:10:00.384  	 Nvme2n3             :       6.17     123.62       7.73       0.00     0.00  854496.05   73876.95  957063.91
00:10:00.384  Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x8000 length 0x8000
00:10:00.384  	 Nvme2n3             :       6.02     132.83       8.30       0.00     0.00  804054.33   35031.97  819795.78
00:10:00.384  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x0 length 0x2000
00:10:00.384  	 Nvme3n1             :       6.20      91.08       5.69       0.00     0.00 1142433.35    2115.03 2135282.04
00:10:00.384  Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:00.384  	 Verification LBA range: start 0x2000 length 0x2000
00:10:00.385  	 Nvme3n1             :       6.14     154.21       9.64       0.00     0.00  675251.89    1347.96 1044763.00
00:10:00.385  
[2024-11-20T14:20:39.367Z]  ===================================================================================================================
00:10:00.385  
[2024-11-20T14:20:39.367Z]  Total                       :               1680.99     105.06       0.00     0.00  939747.96    1347.96 2135282.04
00:10:02.284  
00:10:02.284  real	0m9.219s
00:10:02.284  user	0m17.239s
00:10:02.284  sys	0m0.270s
00:10:02.284   14:20:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:02.284   14:20:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:10:02.284  ************************************
00:10:02.284  END TEST bdev_verify_big_io
00:10:02.284  ************************************
00:10:02.284   14:20:40 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:02.284   14:20:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:02.284   14:20:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:02.285   14:20:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:02.285  ************************************
00:10:02.285  START TEST bdev_write_zeroes
00:10:02.285  ************************************
00:10:02.285   14:20:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:02.285  [2024-11-20 14:20:41.070138] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:10:02.285  [2024-11-20 14:20:41.070305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63813 ]
00:10:02.285  [2024-11-20 14:20:41.238823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:02.543  [2024-11-20 14:20:41.341284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.108  Running I/O for 1 seconds...
00:10:04.482      39872.00 IOPS,   155.75 MiB/s
00:10:04.482                                                                                                  Latency(us)
00:10:04.482  
[2024-11-20T14:20:43.464Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:04.482  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:04.482  	 Nvme0n1             :       1.03    5695.33      22.25       0.00     0.00   22410.86   10366.60   45041.11
00:10:04.482  Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:04.482  	 Nvme1n1p1           :       1.04    5685.87      22.21       0.00     0.00   22406.53   12988.04   45517.73
00:10:04.482  Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:04.483  	 Nvme1n1p2           :       1.04    5676.75      22.17       0.00     0.00   22372.08   13285.93   45517.73
00:10:04.483  Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:04.483  	 Nvme2n1             :       1.04    5668.34      22.14       0.00     0.00   22319.68   11260.28   45041.11
00:10:04.483  Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:04.483  	 Nvme2n2             :       1.04    5659.92      22.11       0.00     0.00   22308.49   11319.85   44564.48
00:10:04.483  Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:04.483  	 Nvme2n3             :       1.04    5651.63      22.08       0.00     0.00   22297.57   11141.12   44802.79
00:10:04.483  Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:04.483  	 Nvme3n1             :       1.04    5643.28      22.04       0.00     0.00   22283.54   10962.39   45279.42
00:10:04.483  
[2024-11-20T14:20:43.465Z]  ===================================================================================================================
00:10:04.483  
[2024-11-20T14:20:43.465Z]  Total                       :              39681.12     155.00       0.00     0.00   22342.68   10366.60   45517.73
00:10:05.417  
00:10:05.417  real	0m3.253s
00:10:05.417  user	0m2.882s
00:10:05.417  sys	0m0.242s
00:10:05.417   14:20:44 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:05.417   14:20:44 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:05.417  ************************************
00:10:05.417  END TEST bdev_write_zeroes
00:10:05.417  ************************************
00:10:05.417   14:20:44 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:05.417   14:20:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:05.417   14:20:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:05.417   14:20:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:05.417  ************************************
00:10:05.417  START TEST bdev_json_nonenclosed
00:10:05.417  ************************************
00:10:05.417   14:20:44 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:05.676  [2024-11-20 14:20:44.401913] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:10:05.676  [2024-11-20 14:20:44.402140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63866 ]
00:10:05.676  [2024-11-20 14:20:44.580003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.933  [2024-11-20 14:20:44.732682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:05.933  [2024-11-20 14:20:44.732811] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:10:05.933  [2024-11-20 14:20:44.732841] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:10:05.933  [2024-11-20 14:20:44.732856] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
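[Editor's note: the nonenclosed.json fixture itself is not shown in this log. Purely as an illustration, a config shaped like the following would trigger the "not enclosed in {}" error from json_config.c:608 above — valid JSON fragments with no surrounding top-level object. The contents are a guess, not the actual test fixture:]

  cat > /tmp/nonenclosed.json <<'EOF'
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
  EOF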
00:10:06.191  
00:10:06.191  real	0m0.734s
00:10:06.191  user	0m0.482s
00:10:06.191  sys	0m0.144s
00:10:06.191   14:20:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:06.191   14:20:45 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:10:06.191  ************************************
00:10:06.191  END TEST bdev_json_nonenclosed
00:10:06.191  ************************************
00:10:06.191   14:20:45 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:06.191   14:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:06.191   14:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:06.191   14:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:06.191  ************************************
00:10:06.191  START TEST bdev_json_nonarray
00:10:06.192  ************************************
00:10:06.192   14:20:45 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:06.192  [2024-11-20 14:20:45.147088] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:10:06.192  [2024-11-20 14:20:45.147257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63892 ]
00:10:06.451  [2024-11-20 14:20:45.321851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:06.715  [2024-11-20 14:20:45.502020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.715  [2024-11-20 14:20:45.502192] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:10:06.716  [2024-11-20 14:20:45.502238] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:10:06.716  [2024-11-20 14:20:45.502263] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
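[Editor's note: as with the previous test, the nonarray.json fixture is not reproduced in the log. An illustrative shape that would hit the "'subsystems' should be an array" error from json_config.c:614 above is a config where 'subsystems' is an object instead of an array — again a guess at the fixture, not its actual contents:]

  cat > /tmp/nonarray.json <<'EOF'
  {
    "subsystems": {
      "subsystem": "bdev",
      "config": []
    }
  }
  EOF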
00:10:06.974  
00:10:06.974  real	0m0.801s
00:10:06.974  user	0m0.574s
00:10:06.974  sys	0m0.118s
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:10:06.974  ************************************
00:10:06.974  END TEST bdev_json_nonarray
00:10:06.974  ************************************
00:10:06.974   14:20:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]]
00:10:06.974   14:20:45 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]]
00:10:06.974   14:20:45 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:10:06.974   14:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:06.974   14:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:06.974   14:20:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:06.974  ************************************
00:10:06.974  START TEST bdev_gpt_uuid
00:10:06.974  ************************************
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63922
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63922
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63922 ']'
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:06.974  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:06.974   14:20:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:07.232  [2024-11-20 14:20:46.051474] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:10:07.232  [2024-11-20 14:20:46.051739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63922 ]
00:10:07.491  [2024-11-20 14:20:46.235555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:07.491  [2024-11-20 14:20:46.398059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:08.425   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:08.425   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0
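[Editor's note: waitforlisten (autotest_common.sh@835-868, traced above) blocks until the freshly started spdk_tgt both stays alive and answers RPCs on its UNIX socket. A sketch of that loop; using rpc_get_methods as the readiness probe is this sketch's assumption, though it is a standard SPDK RPC:]

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" || return 1   # target died during startup
          # Succeeds only once the RPC server is accepting connections.
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                  rpc_get_methods &>/dev/null; then
              return 0
          fi
          sleep 0.5
      done
      return 1
  }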
00:10:08.425   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:08.425   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.425   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:08.684  Some configs were skipped because the RPC state that can call them has already passed.
00:10:08.684   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.684   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine
00:10:08.684   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.684   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:08.684   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.684    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:10:08.684    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.684    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:08.943    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.943   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[
00:10:08.943  {
00:10:08.943  "name": "Nvme1n1p1",
00:10:08.943  "aliases": [
00:10:08.943  "6f89f330-603b-4116-ac73-2ca8eae53030"
00:10:08.943  ],
00:10:08.943  "product_name": "GPT Disk",
00:10:08.943  "block_size": 4096,
00:10:08.943  "num_blocks": 655104,
00:10:08.943  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:10:08.943  "assigned_rate_limits": {
00:10:08.943  "rw_ios_per_sec": 0,
00:10:08.943  "rw_mbytes_per_sec": 0,
00:10:08.943  "r_mbytes_per_sec": 0,
00:10:08.943  "w_mbytes_per_sec": 0
00:10:08.943  },
00:10:08.943  "claimed": false,
00:10:08.943  "zoned": false,
00:10:08.943  "supported_io_types": {
00:10:08.943  "read": true,
00:10:08.943  "write": true,
00:10:08.943  "unmap": true,
00:10:08.943  "flush": true,
00:10:08.943  "reset": true,
00:10:08.943  "nvme_admin": false,
00:10:08.943  "nvme_io": false,
00:10:08.943  "nvme_io_md": false,
00:10:08.943  "write_zeroes": true,
00:10:08.943  "zcopy": false,
00:10:08.943  "get_zone_info": false,
00:10:08.943  "zone_management": false,
00:10:08.943  "zone_append": false,
00:10:08.943  "compare": true,
00:10:08.943  "compare_and_write": false,
00:10:08.943  "abort": true,
00:10:08.943  "seek_hole": false,
00:10:08.943  "seek_data": false,
00:10:08.943  "copy": true,
00:10:08.943  "nvme_iov_md": false
00:10:08.943  },
00:10:08.943  "driver_specific": {
00:10:08.943  "gpt": {
00:10:08.943  "base_bdev": "Nvme1n1",
00:10:08.944  "offset_blocks": 256,
00:10:08.944  "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:10:08.944  "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:10:08.944  "partition_name": "SPDK_TEST_first"
00:10:08.944  }
00:10:08.944  }
00:10:08.944  }
00:10:08.944  ]'
00:10:08.944    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length
00:10:08.944   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]]
00:10:08.944    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]'
00:10:08.944   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:10:08.944    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:10:08.944   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:10:08.944    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:10:08.944    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.944    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:08.944    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.944   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[
00:10:08.944  {
00:10:08.944  "name": "Nvme1n1p2",
00:10:08.944  "aliases": [
00:10:08.944  "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:10:08.944  ],
00:10:08.944  "product_name": "GPT Disk",
00:10:08.944  "block_size": 4096,
00:10:08.944  "num_blocks": 655103,
00:10:08.944  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:10:08.944  "assigned_rate_limits": {
00:10:08.944  "rw_ios_per_sec": 0,
00:10:08.944  "rw_mbytes_per_sec": 0,
00:10:08.944  "r_mbytes_per_sec": 0,
00:10:08.944  "w_mbytes_per_sec": 0
00:10:08.944  },
00:10:08.944  "claimed": false,
00:10:08.944  "zoned": false,
00:10:08.944  "supported_io_types": {
00:10:08.944  "read": true,
00:10:08.944  "write": true,
00:10:08.944  "unmap": true,
00:10:08.944  "flush": true,
00:10:08.944  "reset": true,
00:10:08.944  "nvme_admin": false,
00:10:08.944  "nvme_io": false,
00:10:08.944  "nvme_io_md": false,
00:10:08.944  "write_zeroes": true,
00:10:08.944  "zcopy": false,
00:10:08.944  "get_zone_info": false,
00:10:08.944  "zone_management": false,
00:10:08.944  "zone_append": false,
00:10:08.944  "compare": true,
00:10:08.944  "compare_and_write": false,
00:10:08.944  "abort": true,
00:10:08.944  "seek_hole": false,
00:10:08.944  "seek_data": false,
00:10:08.944  "copy": true,
00:10:08.944  "nvme_iov_md": false
00:10:08.944  },
00:10:08.944  "driver_specific": {
00:10:08.944  "gpt": {
00:10:08.944  "base_bdev": "Nvme1n1",
00:10:08.944  "offset_blocks": 655360,
00:10:08.944  "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:10:08.944  "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:10:08.944  "partition_name": "SPDK_TEST_second"
00:10:08.944  }
00:10:08.944  }
00:10:08.944  }
00:10:08.944  ]'
00:10:08.944    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length
00:10:09.202   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]]
00:10:09.202    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]'
00:10:09.202   14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:10:09.202    14:20:47 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:10:09.202   14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
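[Editor's note] The checks traced above (blockdev.sh@620-628) look up each GPT partition bdev by its unique partition GUID and assert that both the bdev alias and driver_specific.gpt.unique_partition_guid round-trip to the same UUID. A minimal standalone sketch of that validation, assuming a running SPDK target and the stock scripts/rpc.py; the SPDK_DIR default is taken from the paths in this log:

    #!/usr/bin/env bash
    # Sketch of the GPT UUID round-trip check traced above.
    set -euo pipefail
    SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # path from the log; adjust per tree
    uuid=$1                      # e.g. 6f89f330-603b-4116-ac73-2ca8eae53030

    # bdev_get_bdevs -b accepts a name or alias; the partition UUID is an alias.
    bdev=$("$SPDK_DIR/scripts/rpc.py" bdev_get_bdevs -b "$uuid")

    # Exactly one bdev must match, and the alias and the GPT unique
    # partition GUID must both equal the UUID we asked for.
    [[ $(jq -r 'length' <<<"$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]
    echo "GPT bdev $uuid verified"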
00:10:09.202   14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63922
00:10:09.202   14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63922 ']'
00:10:09.202   14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63922
00:10:09.202    14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname
00:10:09.202   14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:09.202    14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63922
00:10:09.202   14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:09.202   14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:09.202  killing process with pid 63922
00:10:09.202   14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63922'
00:10:09.202   14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63922
00:10:09.202   14:20:48 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63922
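[Editor's note] The killprocess sequence above probes the PID with kill -0, checks the process name on Linux (refusing to act if it resolves to sudo), then kills and reaps it. A condensed sketch of that flow as traced, not a verbatim copy of autotest_common.sh:

    # killprocess flow as traced above: probe, name check, kill, wait.
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name != sudo ]] || return 1             # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                 # reap if it is our child
    }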
00:10:11.732  
00:10:11.732  real	0m4.288s
00:10:11.732  user	0m4.722s
00:10:11.732  sys	0m0.471s
00:10:11.732   14:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:11.732   14:20:50 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:10:11.732  ************************************
00:10:11.732  END TEST bdev_gpt_uuid
00:10:11.732  ************************************
00:10:11.732   14:20:50 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]]
00:10:11.732   14:20:50 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:10:11.732   14:20:50 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup
00:10:11.732   14:20:50 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:10:11.732   14:20:50 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:11.732   14:20:50 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]]
00:10:11.732   14:20:50 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]]
00:10:11.732   14:20:50 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]]
00:10:11.732   14:20:50 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:11.732  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:11.732  Waiting for block devices as requested
00:10:11.732  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:11.991  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:11.991  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:11.991  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:17.287  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:17.287   14:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]]
00:10:17.287   14:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1
00:10:17.546  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:10:17.546  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:10:17.546  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:10:17.546  /dev/nvme0n1: calling ioctl to re-read partition table: Success
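[Editor's note] The wipefs output above shows cleanup erasing the primary and backup GPT headers (the eight bytes 45 46 49 20 50 41 52 54 are ASCII "EFI PART") plus the 55 aa protective-MBR magic, then asking the kernel to re-read the partition table. A minimal sketch of the same cleanup; DEV is an assumption and must point at a disposable test disk only:

    # Wipe GPT and protective-MBR signatures, then re-read the partition table.
    DEV=${DEV:-/dev/nvme0n1}
    wipefs --all "$DEV"           # erases the "EFI PART" headers and the 55 aa magic
    blockdev --rereadpt "$DEV"    # explicit re-read; wipefs issues the same ioctl itself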
00:10:17.546   14:20:56 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]]
00:10:17.546  
00:10:17.546  real	1m6.881s
00:10:17.546  user	1m27.762s
00:10:17.546  sys	0m9.867s
00:10:17.546   14:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:17.546   14:20:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:17.546  ************************************
00:10:17.546  END TEST blockdev_nvme_gpt
00:10:17.546  ************************************
00:10:17.546   14:20:56  -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:10:17.546   14:20:56  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:17.546   14:20:56  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:17.546   14:20:56  -- common/autotest_common.sh@10 -- # set +x
00:10:17.546  ************************************
00:10:17.546  START TEST nvme
00:10:17.546  ************************************
00:10:17.546   14:20:56 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:10:17.546  * Looking for test storage...
00:10:17.546  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:17.546    14:20:56 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:17.546     14:20:56 nvme -- common/autotest_common.sh@1693 -- # lcov --version
00:10:17.546     14:20:56 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:17.546    14:20:56 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:17.546    14:20:56 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:17.546    14:20:56 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:17.546    14:20:56 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:17.546    14:20:56 nvme -- scripts/common.sh@336 -- # IFS=.-:
00:10:17.546    14:20:56 nvme -- scripts/common.sh@336 -- # read -ra ver1
00:10:17.546    14:20:56 nvme -- scripts/common.sh@337 -- # IFS=.-:
00:10:17.546    14:20:56 nvme -- scripts/common.sh@337 -- # read -ra ver2
00:10:17.546    14:20:56 nvme -- scripts/common.sh@338 -- # local 'op=<'
00:10:17.546    14:20:56 nvme -- scripts/common.sh@340 -- # ver1_l=2
00:10:17.546    14:20:56 nvme -- scripts/common.sh@341 -- # ver2_l=1
00:10:17.546    14:20:56 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:17.546    14:20:56 nvme -- scripts/common.sh@344 -- # case "$op" in
00:10:17.546    14:20:56 nvme -- scripts/common.sh@345 -- # : 1
00:10:17.546    14:20:56 nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:17.546    14:20:56 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:17.547     14:20:56 nvme -- scripts/common.sh@365 -- # decimal 1
00:10:17.547     14:20:56 nvme -- scripts/common.sh@353 -- # local d=1
00:10:17.547     14:20:56 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:17.547     14:20:56 nvme -- scripts/common.sh@355 -- # echo 1
00:10:17.547    14:20:56 nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:10:17.547     14:20:56 nvme -- scripts/common.sh@366 -- # decimal 2
00:10:17.547     14:20:56 nvme -- scripts/common.sh@353 -- # local d=2
00:10:17.547     14:20:56 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:17.547     14:20:56 nvme -- scripts/common.sh@355 -- # echo 2
00:10:17.547    14:20:56 nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:10:17.547    14:20:56 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:17.547    14:20:56 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:17.547    14:20:56 nvme -- scripts/common.sh@368 -- # return 0
00:10:17.547    14:20:56 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:17.547    14:20:56 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:17.547  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.547  		--rc genhtml_branch_coverage=1
00:10:17.547  		--rc genhtml_function_coverage=1
00:10:17.547  		--rc genhtml_legend=1
00:10:17.547  		--rc geninfo_all_blocks=1
00:10:17.547  		--rc geninfo_unexecuted_blocks=1
00:10:17.547  		
00:10:17.547  		'
00:10:17.547    14:20:56 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:17.547  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.547  		--rc genhtml_branch_coverage=1
00:10:17.547  		--rc genhtml_function_coverage=1
00:10:17.547  		--rc genhtml_legend=1
00:10:17.547  		--rc geninfo_all_blocks=1
00:10:17.547  		--rc geninfo_unexecuted_blocks=1
00:10:17.547  		
00:10:17.547  		'
00:10:17.547    14:20:56 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:17.547  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.547  		--rc genhtml_branch_coverage=1
00:10:17.547  		--rc genhtml_function_coverage=1
00:10:17.547  		--rc genhtml_legend=1
00:10:17.547  		--rc geninfo_all_blocks=1
00:10:17.547  		--rc geninfo_unexecuted_blocks=1
00:10:17.547  		
00:10:17.547  		'
00:10:17.547    14:20:56 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:17.547  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:17.547  		--rc genhtml_branch_coverage=1
00:10:17.547  		--rc genhtml_function_coverage=1
00:10:17.547  		--rc genhtml_legend=1
00:10:17.547  		--rc geninfo_all_blocks=1
00:10:17.547  		--rc geninfo_unexecuted_blocks=1
00:10:17.547  		
00:10:17.547  		'
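[Editor's note] The scripts/common.sh trace above (cmp_versions 1.15 '<' 2) splits both version strings on the IFS set .-: and compares numeric fields left to right; the first fields already give 1 < 2, so lt returns 0 and the branch/function-coverage LCOV flags get exported. A compact sketch of that comparison, assuming purely numeric fields (the real helper validates each field with a regex first):

    # ver_lt A B -> succeeds when A sorts before B, field by field.
    ver_lt() {
        local -a a b
        IFS=.-: read -ra a <<<"$1"
        IFS=.-: read -ra b <<<"$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                  # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov < 2: enable branch/function coverage flags"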
00:10:17.547   14:20:56 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:18.114  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:18.680  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:18.680  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:18.680  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:18.680  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:18.938    14:20:57 nvme -- nvme/nvme.sh@79 -- # uname
00:10:18.938   14:20:57 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:10:18.938   14:20:57 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:10:18.938   14:20:57 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:10:18.938   14:20:57 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:10:18.938   14:20:57 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2
00:10:18.938   14:20:57 nvme -- common/autotest_common.sh@1073 -- # echo 0
00:10:18.938   14:20:57 nvme -- common/autotest_common.sh@1075 -- # stubpid=64570
00:10:18.938   14:20:57 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes...
00:10:18.938  Waiting for stub to ready for secondary processes...
00:10:18.938   14:20:57 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:10:18.938   14:20:57 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:10:18.938   14:20:57 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64570 ]]
00:10:18.938   14:20:57 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:10:18.938  [2024-11-20 14:20:57.711909] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:10:18.938  [2024-11-20 14:20:57.712074] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
00:10:19.871  [2024-11-20 14:20:58.529191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:19.871  [2024-11-20 14:20:58.641466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:19.871  [2024-11-20 14:20:58.641541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:19.871  [2024-11-20 14:20:58.641544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:19.871  [2024-11-20 14:20:58.660203] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands
00:10:19.871  [2024-11-20 14:20:58.660278] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:10:19.871   14:20:58 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:10:19.871   14:20:58 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64570 ]]
00:10:19.871   14:20:58 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:10:19.871  [2024-11-20 14:20:58.670241] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:10:19.871  [2024-11-20 14:20:58.670401] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:10:19.871  [2024-11-20 14:20:58.673202] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:10:19.871  [2024-11-20 14:20:58.673431] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created
00:10:19.871  [2024-11-20 14:20:58.673543] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created
00:10:19.871  [2024-11-20 14:20:58.675649] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:10:19.871  [2024-11-20 14:20:58.675878] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created
00:10:19.871  [2024-11-20 14:20:58.675967] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created
00:10:19.871  [2024-11-20 14:20:58.678267] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:10:19.871  [2024-11-20 14:20:58.678515] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created
00:10:19.871  [2024-11-20 14:20:58.678624] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created
00:10:19.871  [2024-11-20 14:20:58.678680] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created
00:10:19.871  [2024-11-20 14:20:58.678726] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created
00:10:20.803   14:20:59 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:10:20.803  done.
00:10:20.803   14:20:59 nvme -- common/autotest_common.sh@1082 -- # echo done.
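[Editor's note] The start_stub sequence above launches SPDK's stub app (4096 MB hugemem, core mask 0xE) as the primary process so the controllers stay initialized for the secondary test binaries, then loops: while /var/run/spdk_stub0 has not appeared it confirms /proc/$stubpid still exists and sleeps 1 s, echoing "done." once the stub is ready. A sketch of that poll, with the stub path taken from this log:

    # Launch the stub and wait for its primary-process socket to appear.
    STUB=/home/vagrant/spdk_repo/spdk/test/app/stub/stub   # path from the log
    "$STUB" -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [[ ! -e /var/run/spdk_stub0 ]]; do
        [[ -e /proc/$stubpid ]] || { echo "stub exited early" >&2; exit 1; }
        sleep 1s
    done
    echo done.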
00:10:20.803   14:20:59 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:10:20.803   14:20:59 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']'
00:10:20.803   14:20:59 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:20.803   14:20:59 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:20.803  ************************************
00:10:20.803  START TEST nvme_reset
00:10:20.803  ************************************
00:10:20.803   14:20:59 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:10:21.367  Initializing NVMe Controllers
00:10:21.367  Skipping QEMU NVMe SSD at 0000:00:10.0
00:10:21.367  Skipping QEMU NVMe SSD at 0000:00:11.0
00:10:21.367  Skipping QEMU NVMe SSD at 0000:00:13.0
00:10:21.367  Skipping QEMU NVMe SSD at 0000:00:12.0
00:10:21.367  No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:10:21.367  
00:10:21.367  real	0m0.399s
00:10:21.367  user	0m0.175s
00:10:21.367  sys	0m0.178s
00:10:21.367   14:21:00 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:21.367   14:21:00 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:10:21.367  ************************************
00:10:21.367  END TEST nvme_reset
00:10:21.367  ************************************
00:10:21.367   14:21:00 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:10:21.367   14:21:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:21.367   14:21:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:21.367   14:21:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:21.367  ************************************
00:10:21.367  START TEST nvme_identify
00:10:21.367  ************************************
00:10:21.367   14:21:00 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify
00:10:21.367   14:21:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=()
00:10:21.367   14:21:00 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf
00:10:21.367   14:21:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:10:21.367    14:21:00 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:10:21.367    14:21:00 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=()
00:10:21.367    14:21:00 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs
00:10:21.367    14:21:00 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:21.367     14:21:00 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:21.367     14:21:00 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:10:21.367    14:21:00 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:10:21.367    14:21:00 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
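[Editor's note] nvme_identify above builds its device list by rendering scripts/gen_nvme.sh's JSON bdev config and extracting every controller's PCI traddr with jq, bailing out if none were found. The same discovery step in isolation, with rootdir taken from this log:

    # Collect the PCI addresses (BDFs) of all attached NVMe controllers.
    rootdir=${rootdir:-/home/vagrant/spdk_repo/spdk}
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # e.g. 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0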
00:10:21.367   14:21:00 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:10:21.627  [2024-11-20 14:21:00.488787] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64603 terminated unexpected
00:10:21.627  =====================================================
00:10:21.627  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:21.627  =====================================================
00:10:21.627  Controller Capabilities/Features
00:10:21.627  ================================
00:10:21.627  Vendor ID:                             1b36
00:10:21.627  Subsystem Vendor ID:                   1af4
00:10:21.627  Serial Number:                         12340
00:10:21.627  Model Number:                          QEMU NVMe Ctrl
00:10:21.627  Firmware Version:                      8.0.0
00:10:21.627  Recommended Arb Burst:                 6
00:10:21.627  IEEE OUI Identifier:                   00 54 52
00:10:21.627  Multi-path I/O
00:10:21.627    May have multiple subsystem ports:   No
00:10:21.627    May have multiple controllers:       No
00:10:21.627    Associated with SR-IOV VF:           No
00:10:21.627  Max Data Transfer Size:                524288
00:10:21.627  Max Number of Namespaces:              256
00:10:21.627  Max Number of I/O Queues:              64
00:10:21.627  NVMe Specification Version (VS):       1.4
00:10:21.627  NVMe Specification Version (Identify): 1.4
00:10:21.627  Maximum Queue Entries:                 2048
00:10:21.627  Contiguous Queues Required:            Yes
00:10:21.627  Arbitration Mechanisms Supported
00:10:21.627    Weighted Round Robin:                Not Supported
00:10:21.627    Vendor Specific:                     Not Supported
00:10:21.627  Reset Timeout:                         7500 ms
00:10:21.627  Doorbell Stride:                       4 bytes
00:10:21.627  NVM Subsystem Reset:                   Not Supported
00:10:21.627  Command Sets Supported
00:10:21.627    NVM Command Set:                     Supported
00:10:21.627  Boot Partition:                        Not Supported
00:10:21.627  Memory Page Size Minimum:              4096 bytes
00:10:21.627  Memory Page Size Maximum:              65536 bytes
00:10:21.627  Persistent Memory Region:              Not Supported
00:10:21.627  Optional Asynchronous Events Supported
00:10:21.627    Namespace Attribute Notices:         Supported
00:10:21.627    Firmware Activation Notices:         Not Supported
00:10:21.627    ANA Change Notices:                  Not Supported
00:10:21.627    PLE Aggregate Log Change Notices:    Not Supported
00:10:21.627    LBA Status Info Alert Notices:       Not Supported
00:10:21.627    EGE Aggregate Log Change Notices:    Not Supported
00:10:21.627    Normal NVM Subsystem Shutdown event: Not Supported
00:10:21.627    Zone Descriptor Change Notices:      Not Supported
00:10:21.627    Discovery Log Change Notices:        Not Supported
00:10:21.627  Controller Attributes
00:10:21.627    128-bit Host Identifier:             Not Supported
00:10:21.627    Non-Operational Permissive Mode:     Not Supported
00:10:21.627    NVM Sets:                            Not Supported
00:10:21.627    Read Recovery Levels:                Not Supported
00:10:21.627    Endurance Groups:                    Not Supported
00:10:21.627    Predictable Latency Mode:            Not Supported
00:10:21.627    Traffic Based Keep Alive:            Not Supported
00:10:21.627    Namespace Granularity:               Not Supported
00:10:21.627    SQ Associations:                     Not Supported
00:10:21.627    UUID List:                           Not Supported
00:10:21.627    Multi-Domain Subsystem:              Not Supported
00:10:21.627    Fixed Capacity Management:           Not Supported
00:10:21.627    Variable Capacity Management:        Not Supported
00:10:21.627    Delete Endurance Group:              Not Supported
00:10:21.627    Delete NVM Set:                      Not Supported
00:10:21.627    Extended LBA Formats Supported:      Supported
00:10:21.627    Flexible Data Placement Supported:   Not Supported
00:10:21.627  
00:10:21.627  Controller Memory Buffer Support
00:10:21.627  ================================
00:10:21.627  Supported:                             No
00:10:21.627  
00:10:21.627  Persistent Memory Region Support
00:10:21.627  ================================
00:10:21.627  Supported:                             No
00:10:21.627  
00:10:21.627  Admin Command Set Attributes
00:10:21.627  ============================
00:10:21.627  Security Send/Receive:                 Not Supported
00:10:21.627  Format NVM:                            Supported
00:10:21.627  Firmware Activate/Download:            Not Supported
00:10:21.627  Namespace Management:                  Supported
00:10:21.627  Device Self-Test:                      Not Supported
00:10:21.627  Directives:                            Supported
00:10:21.627  NVMe-MI:                               Not Supported
00:10:21.627  Virtualization Management:             Not Supported
00:10:21.627  Doorbell Buffer Config:                Supported
00:10:21.627  Get LBA Status Capability:             Not Supported
00:10:21.627  Command & Feature Lockdown Capability: Not Supported
00:10:21.627  Abort Command Limit:                   4
00:10:21.627  Async Event Request Limit:             4
00:10:21.627  Number of Firmware Slots:              N/A
00:10:21.627  Firmware Slot 1 Read-Only:             N/A
00:10:21.627  Firmware Activation Without Reset:     N/A
00:10:21.627  Multiple Update Detection Support:     N/A
00:10:21.627  Firmware Update Granularity:           No Information Provided
00:10:21.627  Per-Namespace SMART Log:               Yes
00:10:21.627  Asymmetric Namespace Access Log Page:  Not Supported
00:10:21.627  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:10:21.627  Command Effects Log Page:              Supported
00:10:21.627  Get Log Page Extended Data:            Supported
00:10:21.627  Telemetry Log Pages:                   Not Supported
00:10:21.627  Persistent Event Log Pages:            Not Supported
00:10:21.627  Supported Log Pages Log Page:          May Support
00:10:21.627  Commands Supported & Effects Log Page: Not Supported
00:10:21.627  Feature Identifiers & Effects Log Page: May Support
00:10:21.627  NVMe-MI Commands & Effects Log Page:   May Support
00:10:21.627  Data Area 4 for Telemetry Log:         Not Supported
00:10:21.627  Error Log Page Entries Supported:      1
00:10:21.627  Keep Alive:                            Not Supported
00:10:21.627  
00:10:21.627  NVM Command Set Attributes
00:10:21.627  ==========================
00:10:21.627  Submission Queue Entry Size
00:10:21.627    Max:                       64
00:10:21.627    Min:                       64
00:10:21.627  Completion Queue Entry Size
00:10:21.627    Max:                       16
00:10:21.627    Min:                       16
00:10:21.627  Number of Namespaces:        256
00:10:21.627  Compare Command:             Supported
00:10:21.627  Write Uncorrectable Command: Not Supported
00:10:21.627  Dataset Management Command:  Supported
00:10:21.627  Write Zeroes Command:        Supported
00:10:21.627  Set Features Save Field:     Supported
00:10:21.627  Reservations:                Not Supported
00:10:21.627  Timestamp:                   Supported
00:10:21.627  Copy:                        Supported
00:10:21.627  Volatile Write Cache:        Present
00:10:21.627  Atomic Write Unit (Normal):  1
00:10:21.627  Atomic Write Unit (PFail):   1
00:10:21.627  Atomic Compare & Write Unit: 1
00:10:21.627  Fused Compare & Write:       Not Supported
00:10:21.627  Scatter-Gather List
00:10:21.627    SGL Command Set:           Supported
00:10:21.627    SGL Keyed:                 Not Supported
00:10:21.627    SGL Bit Bucket Descriptor: Not Supported
00:10:21.627    SGL Metadata Pointer:      Not Supported
00:10:21.627    Oversized SGL:             Not Supported
00:10:21.627    SGL Metadata Address:      Not Supported
00:10:21.627    SGL Offset:                Not Supported
00:10:21.627    Transport SGL Data Block:  Not Supported
00:10:21.627  Replay Protected Memory Block:  Not Supported
00:10:21.627  
00:10:21.627  Firmware Slot Information
00:10:21.627  =========================
00:10:21.627  Active slot:                 1
00:10:21.627  Slot 1 Firmware Revision:    1.0
00:10:21.627  
00:10:21.627  
00:10:21.627  Commands Supported and Effects
00:10:21.627  ==============================
00:10:21.627  Admin Commands
00:10:21.627  --------------
00:10:21.627     Delete I/O Submission Queue (00h): Supported 
00:10:21.627     Create I/O Submission Queue (01h): Supported 
00:10:21.627                    Get Log Page (02h): Supported 
00:10:21.627     Delete I/O Completion Queue (04h): Supported 
00:10:21.627     Create I/O Completion Queue (05h): Supported 
00:10:21.627                        Identify (06h): Supported 
00:10:21.627                           Abort (08h): Supported 
00:10:21.627                    Set Features (09h): Supported 
00:10:21.627                    Get Features (0Ah): Supported 
00:10:21.627      Asynchronous Event Request (0Ch): Supported 
00:10:21.627            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:21.627                  Directive Send (19h): Supported 
00:10:21.627               Directive Receive (1Ah): Supported 
00:10:21.628       Virtualization Management (1Ch): Supported 
00:10:21.628          Doorbell Buffer Config (7Ch): Supported 
00:10:21.628                      Format NVM (80h): Supported LBA-Change 
00:10:21.628  I/O Commands
00:10:21.628  ------------
00:10:21.628                           Flush (00h): Supported LBA-Change 
00:10:21.628                           Write (01h): Supported LBA-Change 
00:10:21.628                            Read (02h): Supported 
00:10:21.628                         Compare (05h): Supported 
00:10:21.628                    Write Zeroes (08h): Supported LBA-Change 
00:10:21.628              Dataset Management (09h): Supported LBA-Change 
00:10:21.628                         Unknown (0Ch): Supported 
00:10:21.628                         Unknown (12h): Supported 
00:10:21.628                            Copy (19h): Supported LBA-Change 
00:10:21.628                         Unknown (1Dh): Supported LBA-Change 
00:10:21.628  
00:10:21.628  Error Log
00:10:21.628  =========
00:10:21.628  
00:10:21.628  Arbitration
00:10:21.628  ===========
00:10:21.628  Arbitration Burst:           no limit
00:10:21.628  
00:10:21.628  Power Management
00:10:21.628  ================
00:10:21.628  Number of Power States:          1
00:10:21.628  Current Power State:             Power State #0
00:10:21.628  Power State #0:
00:10:21.628    Max Power:                     25.00 W
00:10:21.628    Non-Operational State:         Operational
00:10:21.628    Entry Latency:                 16 microseconds
00:10:21.628    Exit Latency:                  4 microseconds
00:10:21.628    Relative Read Throughput:      0
00:10:21.628    Relative Read Latency:         0
00:10:21.628    Relative Write Throughput:     0
00:10:21.628    Relative Write Latency:        0
00:10:21.628  [2024-11-20 14:21:00.490252] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64603 terminated unexpected
00:10:21.628    Idle Power:                     Not Reported
00:10:21.628    Active Power:                   Not Reported
00:10:21.628  Non-Operational Permissive Mode: Not Supported
00:10:21.628  
00:10:21.628  Health Information
00:10:21.628  ==================
00:10:21.628  Critical Warnings:
00:10:21.628    Available Spare Space:     OK
00:10:21.628    Temperature:               OK
00:10:21.628    Device Reliability:        OK
00:10:21.628    Read Only:                 No
00:10:21.628    Volatile Memory Backup:    OK
00:10:21.628  Current Temperature:         323 Kelvin (50 Celsius)
00:10:21.628  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:21.628  Available Spare:             0%
00:10:21.628  Available Spare Threshold:   0%
00:10:21.628  Life Percentage Used:        0%
00:10:21.628  Data Units Read:             669
00:10:21.628  Data Units Written:          597
00:10:21.628  Host Read Commands:          34228
00:10:21.628  Host Write Commands:         34014
00:10:21.628  Controller Busy Time:        0 minutes
00:10:21.628  Power Cycles:                0
00:10:21.628  Power On Hours:              0 hours
00:10:21.628  Unsafe Shutdowns:            0
00:10:21.628  Unrecoverable Media Errors:  0
00:10:21.628  Lifetime Error Log Entries:  0
00:10:21.628  Warning Temperature Time:    0 minutes
00:10:21.628  Critical Temperature Time:   0 minutes
00:10:21.628  
00:10:21.628  Number of Queues
00:10:21.628  ================
00:10:21.628  Number of I/O Submission Queues:      64
00:10:21.628  Number of I/O Completion Queues:      64
00:10:21.628  
00:10:21.628  ZNS Specific Controller Data
00:10:21.628  ============================
00:10:21.628  Zone Append Size Limit:      0
00:10:21.628  
00:10:21.628  
00:10:21.628  Active Namespaces
00:10:21.628  =================
00:10:21.628  Namespace ID:1
00:10:21.628  Error Recovery Timeout:                Unlimited
00:10:21.628  Command Set Identifier:                NVM (00h)
00:10:21.628  Deallocate:                            Supported
00:10:21.628  Deallocated/Unwritten Error:           Supported
00:10:21.628  Deallocated Read Value:                All 0x00
00:10:21.628  Deallocate in Write Zeroes:            Not Supported
00:10:21.628  Deallocated Guard Field:               0xFFFF
00:10:21.628  Flush:                                 Supported
00:10:21.628  Reservation:                           Not Supported
00:10:21.628  Metadata Transferred as:               Separate Metadata Buffer
00:10:21.628  Namespace Sharing Capabilities:        Private
00:10:21.628  Size (in LBAs):                        1548666 (5GiB)
00:10:21.628  Capacity (in LBAs):                    1548666 (5GiB)
00:10:21.628  Utilization (in LBAs):                 1548666 (5GiB)
00:10:21.628  Thin Provisioning:                     Not Supported
00:10:21.628  Per-NS Atomic Units:                   No
00:10:21.628  Maximum Single Source Range Length:    128
00:10:21.628  Maximum Copy Length:                   128
00:10:21.628  Maximum Source Range Count:            128
00:10:21.628  NGUID/EUI64 Never Reused:              No
00:10:21.628  Namespace Write Protected:             No
00:10:21.628  Number of LBA Formats:                 8
00:10:21.628  Current LBA Format:                    LBA Format #07
00:10:21.628  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:21.628  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:21.628  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:21.628  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:21.628  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:21.628  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:21.628  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:21.628  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:21.628  
00:10:21.628  NVM Specific Namespace Data
00:10:21.628  ===========================
00:10:21.628  Logical Block Storage Tag Mask:               0
00:10:21.628  Protection Information Capabilities:
00:10:21.628    16b Guard Protection Information Storage Tag Support:  No
00:10:21.628    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:21.628    Storage Tag Check Read Support:                        No
00:10:21.628  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.628  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.628  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.628  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.628  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.628  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.628  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.628  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.628  =====================================================
00:10:21.628  NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:21.628  =====================================================
00:10:21.628  Controller Capabilities/Features
00:10:21.628  ================================
00:10:21.628  Vendor ID:                             1b36
00:10:21.628  Subsystem Vendor ID:                   1af4
00:10:21.628  Serial Number:                         12341
00:10:21.628  Model Number:                          QEMU NVMe Ctrl
00:10:21.628  Firmware Version:                      8.0.0
00:10:21.628  Recommended Arb Burst:                 6
00:10:21.628  IEEE OUI Identifier:                   00 54 52
00:10:21.628  Multi-path I/O
00:10:21.628    May have multiple subsystem ports:   No
00:10:21.628    May have multiple controllers:       No
00:10:21.628    Associated with SR-IOV VF:           No
00:10:21.628  Max Data Transfer Size:                524288
00:10:21.628  Max Number of Namespaces:              256
00:10:21.628  Max Number of I/O Queues:              64
00:10:21.628  NVMe Specification Version (VS):       1.4
00:10:21.628  NVMe Specification Version (Identify): 1.4
00:10:21.628  Maximum Queue Entries:                 2048
00:10:21.628  Contiguous Queues Required:            Yes
00:10:21.628  Arbitration Mechanisms Supported
00:10:21.628    Weighted Round Robin:                Not Supported
00:10:21.628    Vendor Specific:                     Not Supported
00:10:21.628  Reset Timeout:                         7500 ms
00:10:21.628  Doorbell Stride:                       4 bytes
00:10:21.628  NVM Subsystem Reset:                   Not Supported
00:10:21.628  Command Sets Supported
00:10:21.628    NVM Command Set:                     Supported
00:10:21.628  Boot Partition:                        Not Supported
00:10:21.628  Memory Page Size Minimum:              4096 bytes
00:10:21.628  Memory Page Size Maximum:              65536 bytes
00:10:21.628  Persistent Memory Region:              Not Supported
00:10:21.628  Optional Asynchronous Events Supported
00:10:21.628    Namespace Attribute Notices:         Supported
00:10:21.628    Firmware Activation Notices:         Not Supported
00:10:21.628    ANA Change Notices:                  Not Supported
00:10:21.628    PLE Aggregate Log Change Notices:    Not Supported
00:10:21.628    LBA Status Info Alert Notices:       Not Supported
00:10:21.628    EGE Aggregate Log Change Notices:    Not Supported
00:10:21.628    Normal NVM Subsystem Shutdown event: Not Supported
00:10:21.628    Zone Descriptor Change Notices:      Not Supported
00:10:21.628    Discovery Log Change Notices:        Not Supported
00:10:21.628  Controller Attributes
00:10:21.628    128-bit Host Identifier:             Not Supported
00:10:21.628    Non-Operational Permissive Mode:     Not Supported
00:10:21.628    NVM Sets:                            Not Supported
00:10:21.628    Read Recovery Levels:                Not Supported
00:10:21.628    Endurance Groups:                    Not Supported
00:10:21.628    Predictable Latency Mode:            Not Supported
00:10:21.628    Traffic Based Keep Alive:            Not Supported
00:10:21.628    Namespace Granularity:               Not Supported
00:10:21.628    SQ Associations:                     Not Supported
00:10:21.628    UUID List:                           Not Supported
00:10:21.628    Multi-Domain Subsystem:              Not Supported
00:10:21.628    Fixed Capacity Management:           Not Supported
00:10:21.628    Variable Capacity Management:        Not Supported
00:10:21.628    Delete Endurance Group:              Not Supported
00:10:21.628    Delete NVM Set:                      Not Supported
00:10:21.628    Extended LBA Formats Supported:      Supported
00:10:21.628    Flexible Data Placement Supported:   Not Supported
00:10:21.628  
00:10:21.628  Controller Memory Buffer Support
00:10:21.628  ================================
00:10:21.628  Supported:                             No
00:10:21.628  
00:10:21.628  Persistent Memory Region Support
00:10:21.628  ================================
00:10:21.628  Supported:                             No
00:10:21.628  
00:10:21.628  Admin Command Set Attributes
00:10:21.628  ============================
00:10:21.628  Security Send/Receive:                 Not Supported
00:10:21.628  Format NVM:                            Supported
00:10:21.628  Firmware Activate/Download:            Not Supported
00:10:21.628  Namespace Management:                  Supported
00:10:21.628  Device Self-Test:                      Not Supported
00:10:21.628  Directives:                            Supported
00:10:21.628  NVMe-MI:                               Not Supported
00:10:21.628  Virtualization Management:             Not Supported
00:10:21.628  Doorbell Buffer Config:                Supported
00:10:21.628  Get LBA Status Capability:             Not Supported
00:10:21.628  Command & Feature Lockdown Capability: Not Supported
00:10:21.628  Abort Command Limit:                   4
00:10:21.628  Async Event Request Limit:             4
00:10:21.628  Number of Firmware Slots:              N/A
00:10:21.628  Firmware Slot 1 Read-Only:             N/A
00:10:21.628  Firmware Activation Without Reset:     N/A
00:10:21.628  Multiple Update Detection Support:     N/A
00:10:21.628  Firmware Update Granularity:           No Information Provided
00:10:21.628  Per-Namespace SMART Log:               Yes
00:10:21.628  Asymmetric Namespace Access Log Page:  Not Supported
00:10:21.628  Subsystem NQN:                         nqn.2019-08.org.qemu:12341
00:10:21.628  Command Effects Log Page:              Supported
00:10:21.628  Get Log Page Extended Data:            Supported
00:10:21.628  Telemetry Log Pages:                   Not Supported
00:10:21.628  Persistent Event Log Pages:            Not Supported
00:10:21.628  Supported Log Pages Log Page:          May Support
00:10:21.628  Commands Supported & Effects Log Page: Not Supported
00:10:21.628  Feature Identifiers & Effects Log Page: May Support
00:10:21.628  NVMe-MI Commands & Effects Log Page:   May Support
00:10:21.628  Data Area 4 for Telemetry Log:         Not Supported
00:10:21.628  Error Log Page Entries Supported:      1
00:10:21.628  Keep Alive:                            Not Supported
00:10:21.628  
00:10:21.628  NVM Command Set Attributes
00:10:21.628  ==========================
00:10:21.628  Submission Queue Entry Size
00:10:21.628    Max:                       64
00:10:21.628    Min:                       64
00:10:21.628  Completion Queue Entry Size
00:10:21.628    Max:                       16
00:10:21.628    Min:                       16
00:10:21.628  Number of Namespaces:        256
00:10:21.628  Compare Command:             Supported
00:10:21.628  Write Uncorrectable Command: Not Supported
00:10:21.628  Dataset Management Command:  Supported
00:10:21.628  Write Zeroes Command:        Supported
00:10:21.628  Set Features Save Field:     Supported
00:10:21.628  Reservations:                Not Supported
00:10:21.628  Timestamp:                   Supported
00:10:21.628  Copy:                        Supported
00:10:21.628  Volatile Write Cache:        Present
00:10:21.628  Atomic Write Unit (Normal):  1
00:10:21.628  Atomic Write Unit (PFail):   1
00:10:21.628  Atomic Compare & Write Unit: 1
00:10:21.628  Fused Compare & Write:       Not Supported
00:10:21.628  Scatter-Gather List
00:10:21.628    SGL Command Set:           Supported
00:10:21.628    SGL Keyed:                 Not Supported
00:10:21.628    SGL Bit Bucket Descriptor: Not Supported
00:10:21.628    SGL Metadata Pointer:      Not Supported
00:10:21.628    Oversized SGL:             Not Supported
00:10:21.628    SGL Metadata Address:      Not Supported
00:10:21.628    SGL Offset:                Not Supported
00:10:21.628    Transport SGL Data Block:  Not Supported
00:10:21.628  Replay Protected Memory Block:  Not Supported
00:10:21.628  
00:10:21.628  Firmware Slot Information
00:10:21.628  =========================
00:10:21.628  Active slot:                 1
00:10:21.628  Slot 1 Firmware Revision:    1.0
00:10:21.628  
00:10:21.628  
00:10:21.628  Commands Supported and Effects
00:10:21.628  ==============================
00:10:21.628  Admin Commands
00:10:21.628  --------------
00:10:21.628     Delete I/O Submission Queue (00h): Supported 
00:10:21.628     Create I/O Submission Queue (01h): Supported 
00:10:21.628                    Get Log Page (02h): Supported 
00:10:21.628     Delete I/O Completion Queue (04h): Supported 
00:10:21.628     Create I/O Completion Queue (05h): Supported 
00:10:21.628                        Identify (06h): Supported 
00:10:21.628                           Abort (08h): Supported 
00:10:21.628                    Set Features (09h): Supported 
00:10:21.628                    Get Features (0Ah): Supported 
00:10:21.628      Asynchronous Event Request (0Ch): Supported 
00:10:21.628            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:21.628                  Directive Send (19h): Supported 
00:10:21.628               Directive Receive (1Ah): Supported 
00:10:21.628       Virtualization Management (1Ch): Supported 
00:10:21.628          Doorbell Buffer Config (7Ch): Supported 
00:10:21.628                      Format NVM (80h): Supported LBA-Change 
00:10:21.628  I/O Commands
00:10:21.628  ------------
00:10:21.628                           Flush (00h): Supported LBA-Change 
00:10:21.628                           Write (01h): Supported LBA-Change 
00:10:21.628                            Read (02h): Supported 
00:10:21.628                         Compare (05h): Supported 
00:10:21.628                    Write Zeroes (08h): Supported LBA-Change 
00:10:21.628              Dataset Management (09h): Supported LBA-Change 
00:10:21.628                         Unknown (0Ch): Supported 
00:10:21.628                         Unknown (12h): Supported 
00:10:21.628                            Copy (19h): Supported LBA-Change 
00:10:21.629                         Unknown (1Dh): Supported LBA-Change 
00:10:21.629  
00:10:21.629  Error Log
00:10:21.629  =========
00:10:21.629  
00:10:21.629  Arbitration
00:10:21.629  ===========
00:10:21.629  Arbitration Burst:           no limit
00:10:21.629  
00:10:21.629  Power Management
00:10:21.629  ================
00:10:21.629  Number of Power States:          1
00:10:21.629  Current Power State:             Power State #0
00:10:21.629  Power State #0:
00:10:21.629    Max Power:                     25.00 W
00:10:21.629    Non-Operational State:         Operational
00:10:21.629    Entry Latency:                 16 microseconds
00:10:21.629    Exit Latency:                  4 microseconds
00:10:21.629    Relative Read Throughput:      0
00:10:21.629    Relative Read Latency:         0
00:10:21.629    Relative Write Throughput:     0
00:10:21.629    Relative Write Latency:        0
00:10:21.629    Idle Power:                     Not Reported
00:10:21.629    Active Power:                   Not Reported
00:10:21.629  Non-Operational Permissive Mode: Not Supported
00:10:21.629  
00:10:21.629  Health Information
00:10:21.629  ==================
00:10:21.629  Critical Warnings:
00:10:21.629    Available Spare Space:     OK
00:10:21.629  [2024-11-20 14:21:00.491288] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64603 terminated unexpected
00:10:21.629    Temperature:               OK
00:10:21.629    Device Reliability:        OK
00:10:21.629    Read Only:                 No
00:10:21.629    Volatile Memory Backup:    OK
00:10:21.629  Current Temperature:         323 Kelvin (50 Celsius)
00:10:21.629  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:21.629  Available Spare:             0%
00:10:21.629  Available Spare Threshold:   0%
00:10:21.629  Life Percentage Used:        0%
00:10:21.629  Data Units Read:             1028
00:10:21.629  Data Units Written:          896
00:10:21.629  Host Read Commands:          50944
00:10:21.629  Host Write Commands:         49729
00:10:21.629  Controller Busy Time:        0 minutes
00:10:21.629  Power Cycles:                0
00:10:21.629  Power On Hours:              0 hours
00:10:21.629  Unsafe Shutdowns:            0
00:10:21.629  Unrecoverable Media Errors:  0
00:10:21.629  Lifetime Error Log Entries:  0
00:10:21.629  Warning Temperature Time:    0 minutes
00:10:21.629  Critical Temperature Time:   0 minutes
00:10:21.629  
00:10:21.629  Number of Queues
00:10:21.629  ================
00:10:21.629  Number of I/O Submission Queues:      64
00:10:21.629  Number of I/O Completion Queues:      64
00:10:21.629  
00:10:21.629  ZNS Specific Controller Data
00:10:21.629  ============================
00:10:21.629  Zone Append Size Limit:      0
00:10:21.629  
00:10:21.629  
00:10:21.629  Active Namespaces
00:10:21.629  =================
00:10:21.629  Namespace ID:1
00:10:21.629  Error Recovery Timeout:                Unlimited
00:10:21.629  Command Set Identifier:                NVM (00h)
00:10:21.629  Deallocate:                            Supported
00:10:21.629  Deallocated/Unwritten Error:           Supported
00:10:21.629  Deallocated Read Value:                All 0x00
00:10:21.629  Deallocate in Write Zeroes:            Not Supported
00:10:21.629  Deallocated Guard Field:               0xFFFF
00:10:21.629  Flush:                                 Supported
00:10:21.629  Reservation:                           Not Supported
00:10:21.629  Namespace Sharing Capabilities:        Private
00:10:21.629  Size (in LBAs):                        1310720 (5GiB)
00:10:21.629  Capacity (in LBAs):                    1310720 (5GiB)
00:10:21.629  Utilization (in LBAs):                 1310720 (5GiB)
00:10:21.629  Thin Provisioning:                     Not Supported
00:10:21.629  Per-NS Atomic Units:                   No
00:10:21.629  Maximum Single Source Range Length:    128
00:10:21.629  Maximum Copy Length:                   128
00:10:21.629  Maximum Source Range Count:            128
00:10:21.629  NGUID/EUI64 Never Reused:              No
00:10:21.629  Namespace Write Protected:             No
00:10:21.629  Number of LBA Formats:                 8
00:10:21.629  Current LBA Format:                    LBA Format #04
00:10:21.629  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:21.629  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:21.629  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:21.629  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:21.629  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:21.629  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:21.629  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:21.629  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:21.629  
00:10:21.629  NVM Specific Namespace Data
00:10:21.629  ===========================
00:10:21.629  Logical Block Storage Tag Mask:               0
00:10:21.629  Protection Information Capabilities:
00:10:21.629    16b Guard Protection Information Storage Tag Support:  No
00:10:21.629    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:21.629    Storage Tag Check Read Support:                        No
00:10:21.629  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.629  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.629  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.629  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.629  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.629  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.629  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.629  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.629  =====================================================
00:10:21.629  NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:21.629  =====================================================
00:10:21.629  Controller Capabilities/Features
00:10:21.629  ================================
00:10:21.629  Vendor ID:                             1b36
00:10:21.629  Subsystem Vendor ID:                   1af4
00:10:21.629  Serial Number:                         12343
00:10:21.629  Model Number:                          QEMU NVMe Ctrl
00:10:21.629  Firmware Version:                      8.0.0
00:10:21.629  Recommended Arb Burst:                 6
00:10:21.629  IEEE OUI Identifier:                   00 54 52
00:10:21.629  Multi-path I/O
00:10:21.629    May have multiple subsystem ports:   No
00:10:21.629    May have multiple controllers:       Yes
00:10:21.629    Associated with SR-IOV VF:           No
00:10:21.629  Max Data Transfer Size:                524288
00:10:21.629  Max Number of Namespaces:              256
00:10:21.629  Max Number of I/O Queues:              64
00:10:21.629  NVMe Specification Version (VS):       1.4
00:10:21.629  NVMe Specification Version (Identify): 1.4
00:10:21.629  Maximum Queue Entries:                 2048
00:10:21.629  Contiguous Queues Required:            Yes
00:10:21.629  Arbitration Mechanisms Supported
00:10:21.629    Weighted Round Robin:                Not Supported
00:10:21.629    Vendor Specific:                     Not Supported
00:10:21.629  Reset Timeout:                         7500 ms
00:10:21.629  Doorbell Stride:                       4 bytes
00:10:21.629  NVM Subsystem Reset:                   Not Supported
00:10:21.629  Command Sets Supported
00:10:21.629    NVM Command Set:                     Supported
00:10:21.629  Boot Partition:                        Not Supported
00:10:21.629  Memory Page Size Minimum:              4096 bytes
00:10:21.629  Memory Page Size Maximum:              65536 bytes
00:10:21.629  Persistent Memory Region:              Not Supported
00:10:21.629  Optional Asynchronous Events Supported
00:10:21.629    Namespace Attribute Notices:         Supported
00:10:21.629    Firmware Activation Notices:         Not Supported
00:10:21.629    ANA Change Notices:                  Not Supported
00:10:21.629    PLE Aggregate Log Change Notices:    Not Supported
00:10:21.629    LBA Status Info Alert Notices:       Not Supported
00:10:21.629    EGE Aggregate Log Change Notices:    Not Supported
00:10:21.629    Normal NVM Subsystem Shutdown event: Not Supported
00:10:21.629    Zone Descriptor Change Notices:      Not Supported
00:10:21.629    Discovery Log Change Notices:        Not Supported
00:10:21.629  Controller Attributes
00:10:21.629    128-bit Host Identifier:             Not Supported
00:10:21.629    Non-Operational Permissive Mode:     Not Supported
00:10:21.629    NVM Sets:                            Not Supported
00:10:21.629    Read Recovery Levels:                Not Supported
00:10:21.629    Endurance Groups:                    Supported
00:10:21.629    Predictable Latency Mode:            Not Supported
00:10:21.629    Traffic Based Keep Alive:            Not Supported
00:10:21.629    Namespace Granularity:               Not Supported
00:10:21.629    SQ Associations:                     Not Supported
00:10:21.629    UUID List:                           Not Supported
00:10:21.629    Multi-Domain Subsystem:              Not Supported
00:10:21.629    Fixed Capacity Management:           Not Supported
00:10:21.629    Variable Capacity Management:        Not Supported
00:10:21.629    Delete Endurance Group:              Not Supported
00:10:21.629    Delete NVM Set:                      Not Supported
00:10:21.629    Extended LBA Formats Supported:      Supported
00:10:21.629    Flexible Data Placement Supported:   Supported
00:10:21.629  
00:10:21.629  Controller Memory Buffer Support
00:10:21.629  ================================
00:10:21.629  Supported:                             No
00:10:21.629  
00:10:21.629  Persistent Memory Region Support
00:10:21.629  ================================
00:10:21.629  Supported:                             No
00:10:21.629  
00:10:21.629  Admin Command Set Attributes
00:10:21.629  ============================
00:10:21.629  Security Send/Receive:                 Not Supported
00:10:21.629  Format NVM:                            Supported
00:10:21.629  Firmware Activate/Download:            Not Supported
00:10:21.629  Namespace Management:                  Supported
00:10:21.629  Device Self-Test:                      Not Supported
00:10:21.629  Directives:                            Supported
00:10:21.629  NVMe-MI:                               Not Supported
00:10:21.629  Virtualization Management:             Not Supported
00:10:21.629  Doorbell Buffer Config:                Supported
00:10:21.629  Get LBA Status Capability:             Not Supported
00:10:21.629  Command & Feature Lockdown Capability: Not Supported
00:10:21.629  Abort Command Limit:                   4
00:10:21.629  Async Event Request Limit:             4
00:10:21.629  Number of Firmware Slots:              N/A
00:10:21.629  Firmware Slot 1 Read-Only:             N/A
00:10:21.629  Firmware Activation Without Reset:     N/A
00:10:21.629  Multiple Update Detection Support:     N/A
00:10:21.629  Firmware Update Granularity:           No Information Provided
00:10:21.629  Per-Namespace SMART Log:               Yes
00:10:21.629  Asymmetric Namespace Access Log Page:  Not Supported
00:10:21.629  Subsystem NQN:                         nqn.2019-08.org.qemu:fdp-subsys3
00:10:21.629  Command Effects Log Page:              Supported
00:10:21.629  Get Log Page Extended Data:            Supported
00:10:21.629  Telemetry Log Pages:                   Not Supported
00:10:21.629  Persistent Event Log Pages:            Not Supported
00:10:21.629  Supported Log Pages Log Page:          May Support
00:10:21.629  Commands Supported & Effects Log Page: Not Supported
00:10:21.629  Feature Identifiers & Effects Log Page: May Support
00:10:21.629  NVMe-MI Commands & Effects Log Page:   May Support
00:10:21.629  Data Area 4 for Telemetry Log:         Not Supported
00:10:21.629  Error Log Page Entries Supported:      1
00:10:21.629  Keep Alive:                            Not Supported
00:10:21.629  
00:10:21.629  NVM Command Set Attributes
00:10:21.629  ==========================
00:10:21.629  Submission Queue Entry Size
00:10:21.629    Max:                       64
00:10:21.629    Min:                       64
00:10:21.629  Completion Queue Entry Size
00:10:21.629    Max:                       16
00:10:21.629    Min:                       16
00:10:21.629  Number of Namespaces:        256
00:10:21.629  Compare Command:             Supported
00:10:21.629  Write Uncorrectable Command: Not Supported
00:10:21.629  Dataset Management Command:  Supported
00:10:21.629  Write Zeroes Command:        Supported
00:10:21.629  Set Features Save Field:     Supported
00:10:21.629  Reservations:                Not Supported
00:10:21.629  Timestamp:                   Supported
00:10:21.629  Copy:                        Supported
00:10:21.629  Volatile Write Cache:        Present
00:10:21.629  Atomic Write Unit (Normal):  1
00:10:21.629  Atomic Write Unit (PFail):   1
00:10:21.629  Atomic Compare & Write Unit: 1
00:10:21.629  Fused Compare & Write:       Not Supported
00:10:21.629  Scatter-Gather List
00:10:21.629    SGL Command Set:           Supported
00:10:21.629    SGL Keyed:                 Not Supported
00:10:21.629    SGL Bit Bucket Descriptor: Not Supported
00:10:21.629    SGL Metadata Pointer:      Not Supported
00:10:21.629    Oversized SGL:             Not Supported
00:10:21.629    SGL Metadata Address:      Not Supported
00:10:21.629    SGL Offset:                Not Supported
00:10:21.629    Transport SGL Data Block:  Not Supported
00:10:21.629  Replay Protected Memory Block:  Not Supported
00:10:21.629  
00:10:21.629  Firmware Slot Information
00:10:21.629  =========================
00:10:21.629  Active slot:                 1
00:10:21.629  Slot 1 Firmware Revision:    1.0
00:10:21.629  
00:10:21.629  
00:10:21.629  Commands Supported and Effects
00:10:21.629  ==============================
00:10:21.629  Admin Commands
00:10:21.629  --------------
00:10:21.629     Delete I/O Submission Queue (00h): Supported 
00:10:21.629     Create I/O Submission Queue (01h): Supported 
00:10:21.629                    Get Log Page (02h): Supported 
00:10:21.629     Delete I/O Completion Queue (04h): Supported 
00:10:21.629     Create I/O Completion Queue (05h): Supported 
00:10:21.629                        Identify (06h): Supported 
00:10:21.629                           Abort (08h): Supported 
00:10:21.629                    Set Features (09h): Supported 
00:10:21.629                    Get Features (0Ah): Supported 
00:10:21.629      Asynchronous Event Request (0Ch): Supported 
00:10:21.629            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:21.629                  Directive Send (19h): Supported 
00:10:21.629               Directive Receive (1Ah): Supported 
00:10:21.629       Virtualization Management (1Ch): Supported 
00:10:21.629          Doorbell Buffer Config (7Ch): Supported 
00:10:21.629                      Format NVM (80h): Supported LBA-Change 
00:10:21.629  I/O Commands
00:10:21.629  ------------
00:10:21.629                           Flush (00h): Supported LBA-Change 
00:10:21.629                           Write (01h): Supported LBA-Change 
00:10:21.629                            Read (02h): Supported 
00:10:21.629                         Compare (05h): Supported 
00:10:21.629                    Write Zeroes (08h): Supported LBA-Change 
00:10:21.629              Dataset Management (09h): Supported LBA-Change 
00:10:21.629                         Unknown (0Ch): Supported 
00:10:21.629                         Unknown (12h): Supported 
00:10:21.629                            Copy (19h): Supported LBA-Change 
00:10:21.629                         Unknown (1Dh): Supported LBA-Change 
00:10:21.629  
00:10:21.629  Error Log
00:10:21.629  =========
00:10:21.629  
00:10:21.629  Arbitration
00:10:21.629  ===========
00:10:21.629  Arbitration Burst:           no limit
00:10:21.629  
00:10:21.629  Power Management
00:10:21.629  ================
00:10:21.629  Number of Power States:          1
00:10:21.629  Current Power State:             Power State #0
00:10:21.629  Power State #0:
00:10:21.629    Max Power:                     25.00 W
00:10:21.629    Non-Operational State:         Operational
00:10:21.629    Entry Latency:                 16 microseconds
00:10:21.629    Exit Latency:                  4 microseconds
00:10:21.629    Relative Read Throughput:      0
00:10:21.629    Relative Read Latency:         0
00:10:21.629    Relative Write Throughput:     0
00:10:21.630    Relative Write Latency:        0
00:10:21.630    Idle Power:                     Not Reported
00:10:21.630    Active Power:                   Not Reported
00:10:21.630  Non-Operational Permissive Mode: Not Supported
00:10:21.630  
00:10:21.630  Health Information
00:10:21.630  ==================
00:10:21.630  Critical Warnings:
00:10:21.630    Available Spare Space:     OK
00:10:21.630    Temperature:               OK
00:10:21.630    Device Reliability:        OK
00:10:21.630    Read Only:                 No
00:10:21.630    Volatile Memory Backup:    OK
00:10:21.630  Current Temperature:         323 Kelvin (50 Celsius)
00:10:21.630  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:21.630  Available Spare:             0%
00:10:21.630  Available Spare Threshold:   0%
00:10:21.630  Life Percentage Used:        0%
00:10:21.630  Data Units Read:             750
00:10:21.630  Data Units Written:          679
00:10:21.630  Host Read Commands:          35099
00:10:21.630  Host Write Commands:         34522
00:10:21.630  Controller Busy Time:        0 minutes
00:10:21.630  Power Cycles:                0
00:10:21.630  Power On Hours:              0 hours
00:10:21.630  Unsafe Shutdowns:            0
00:10:21.630  Unrecoverable Media Errors:  0
00:10:21.630  Lifetime Error Log Entries:  0
00:10:21.630  Warning Temperature Time:    0 minutes
00:10:21.630  Critical Temperature Time:   0 minutes
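
The Health Information block above is a decode of the controller's SMART / Health Information log page (log ID 02h); the temperature field in that page is specified in Kelvin, which is why the log prints 323 Kelvin alongside the 50 Celsius conversion. As a rough illustration, the same page can be pulled through SPDK's public admin API. The sketch below is not how spdk_nvme_identify itself is structured; it assumes an already-attached ctrlr and uses simple polling:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool g_health_done;

    /* Admin completion callback: just note that the Get Log Page finished. */
    static void
    health_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            g_health_done = true;
    }

    /* Fetch the SMART / Health Information page and print two of the
     * fields shown in the dump above. 'ctrlr' is assumed attached. */
    static void
    print_health(struct spdk_nvme_ctrlr *ctrlr)
    {
            static struct spdk_nvme_health_information_page page;

            if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                                 SPDK_NVME_GLOBAL_NS_TAG, &page, sizeof(page),
                                                 0, health_log_done, NULL) != 0) {
                    return;
            }
            while (!g_health_done) {
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            printf("Temperature: %u K\n", page.temperature);   /* e.g. 323 above */
            printf("Unsafe shutdowns: %ju\n", (uintmax_t)page.unsafe_shutdowns[0]);
    }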
00:10:21.630  
00:10:21.630  Number of Queues
00:10:21.630  ================
00:10:21.630  Number of I/O Submission Queues:      64
00:10:21.630  Number of I/O Completion Queues:      64
00:10:21.630  
00:10:21.630  ZNS Specific Controller Data
00:10:21.630  ============================
00:10:21.630  Zone Append Size Limit:      0
00:10:21.630  
00:10:21.630  
00:10:21.630  Active Namespaces
00:10:21.630  =================
00:10:21.630  Namespace ID:1
00:10:21.630  Error Recovery Timeout:                Unlimited
00:10:21.630  Command Set Identifier:                NVM (00h)
00:10:21.630  Deallocate:                            Supported
00:10:21.630  Deallocated/Unwritten Error:           Supported
00:10:21.630  Deallocated Read Value:                All 0x00
00:10:21.630  Deallocate in Write Zeroes:            Not Supported
00:10:21.630  Deallocated Guard Field:               0xFFFF
00:10:21.630  Flush:                                 Supported
00:10:21.630  Reservation:                           Not Supported
00:10:21.630  Namespace Sharing Capabilities:        Multiple Controllers
00:10:21.630  Size (in LBAs):                        262144 (1GiB)
00:10:21.630  Capacity (in LBAs):                    262144 (1GiB)
00:10:21.630  Utilization (in LBAs):                 262144 (1GiB)
00:10:21.630  Thin Provisioning:                     Not Supported
00:10:21.630  Per-NS Atomic Units:                   No
00:10:21.630  Maximum Single Source Range Length:    128
00:10:21.630  Maximum Copy Length:                   128
00:10:21.630  Maximum Source Range Count:            128
00:10:21.630  NGUID/EUI64 Never Reused:              No
00:10:21.630  Namespace Write Protected:             No
00:10:21.630  Endurance group ID:                    1
00:10:21.630  Number of LBA Formats:                 8
00:10:21.630  Current LBA Format:                    LBA Format #04
00:10:21.630  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:21.630  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:21.630  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:21.630  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:21.630  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:21.630  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:21.630  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:21.630  LBA Format #07: Data Size:  4096  Metadata Size:    64
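
As a sanity check on the figures above: the active format is LBA Format #04 (4096-byte data blocks, no metadata), and 262144 LBAs x 4096 bytes = 1073741824 bytes, i.e. exactly the 1GiB the tool reports for size, capacity and utilization. A minimal sketch of reading the same geometry through SPDK's namespace accessors (controller setup and attach are assumed to have happened elsewhere):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Print the geometry fields shown above for one namespace ID. */
    static void
    print_ns_geometry(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid)
    {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
                    return;
            }
            /* For NSID 1 above: 262144 sectors * 4096 bytes = 1073741824 bytes (1GiB). */
            printf("NSID %u: %ju LBAs of %u bytes = %ju bytes\n",
                   nsid,
                   (uintmax_t)spdk_nvme_ns_get_num_sectors(ns),
                   spdk_nvme_ns_get_sector_size(ns),
                   (uintmax_t)spdk_nvme_ns_get_size(ns));
    }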
00:10:21.630  
00:10:21.630  Get Feature FDP:
00:10:21.630  ================
00:10:21.630    Enabled:                 Yes
00:10:21.630    FDP configuration index: 0
00:10:21.630  
00:10:21.630  FDP configurations log page
00:10:21.630  ===========================
00:10:21.630  Number of FDP configurations:         1
00:10:21.630  Version:                              0
00:10:21.630  Size:                                 112
00:10:21.630  FDP Configuration Descriptor:         0
00:10:21.630    Descriptor Size:                    96
00:10:21.630    Reclaim Group Identifier format:    2
00:10:21.630    FDP Volatile Write Cache:           Not Present
00:10:21.630    FDP Configuration:                  Valid
00:10:21.630    Vendor Specific Size:               0
00:10:21.630    Number of Reclaim Groups:           2
00:10:21.630    Number of Reclaim Unit Handles:     8
00:10:21.630    Max Placement Identifiers:          128
00:10:21.630    Number of Namespaces Supported:     256
00:10:21.630    Reclaim Unit Nominal Size:          6000000 bytes
00:10:21.630    Estimated Reclaim Unit Time Limit:  Not Reported
00:10:21.630      RUH Desc #000:          RUH Type: Initially Isolated
00:10:21.630      RUH Desc #001:          RUH Type: Initially Isolated
00:10:21.630      RUH Desc #002:          RUH Type: Initially Isolated
00:10:21.630      RUH Desc #003:          RUH Type: Initially Isolated
00:10:21.630      RUH Desc #004:          RUH Type: Initially Isolated
00:10:21.630      RUH Desc #005:          RUH Type: Initially Isolated
00:10:21.630      RUH Desc #006:          RUH Type: Initially Isolated
00:10:21.630      RUH Desc #007:          RUH Type: Initially Isolated
00:10:21.630  
00:10:21.630  FDP reclaim unit handle usage log page
00:10:21.630  ======================================
00:10:21.630  Number of Reclaim Unit Handles:       8
00:10:21.630    RUH Usage Desc #000:   RUH Attributes: Controller Specified
00:10:21.630    RUH Usage Desc #001:   RUH Attributes: Unused
00:10:21.630    RUH Usage Desc #002:   RUH Attributes: Unused
00:10:21.630    RUH Usage Desc #003:   RUH Attributes: Unused
00:10:21.630    RUH Usage Desc #004:   RUH Attributes: Unused
00:10:21.630    RUH Usage Desc #005:   RUH Attributes: Unused
00:10:21.630    RUH Usage Desc #006:   RUH Attributes: Unused
00:10:21.630    RUH Usage Desc #007:   RUH Attributes: Unused
00:10:21.630  
00:10:21.630  FDP statistics log page
00:10:21.630  =======================
00:10:21.630  Host bytes with metadata written:  414687232
00:10:21.630  Media bytes with metadata written: 414732288
00:10:21.630  [2024-11-20 14:21:00.493162] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64603 terminated unexpected
00:10:21.630  Media bytes erased:                0
00:10:21.630  
00:10:21.630  FDP events log page
00:10:21.630  ===================
00:10:21.630  Number of FDP events:              0
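
The four FDP log pages above (configurations, reclaim unit handle usage, statistics, events) are the NVMe 2.0 log IDs 20h through 23h, and this controller reports FDP enabled on endurance group 1. Below is a sketch of fetching the configurations page with SPDK's extended get-log-page call; the log ID is written numerically, and the CDW11 placement of the endurance group ID is an assumption taken from the NVMe 2.0 spec, not from spdk_nvme_identify:

    #include "spdk/nvme.h"

    #define FDP_LOG_CONFIGURATIONS 0x20 /* 0x21-0x23: RUH usage, statistics, events */

    /* Admin completion callback: flag completion through cb_arg. */
    static void
    fdp_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            *(bool *)arg = true;
    }

    /* Read the FDP configurations log for endurance group 'egid' (1 above). */
    static int
    read_fdp_config(struct spdk_nvme_ctrlr *ctrlr, uint16_t egid,
                    void *buf, uint32_t len)
    {
            bool done = false;
            /* Assumption from the NVMe 2.0 spec: Get Log Page CDW11 bits 31:16
             * carry the Log Specific Identifier, which for the FDP pages is
             * the endurance group ID. */
            int rc = spdk_nvme_ctrlr_cmd_get_log_page_ext(ctrlr, FDP_LOG_CONFIGURATIONS,
                                                          SPDK_NVME_GLOBAL_NS_TAG, buf, len,
                                                          0, 0, (uint32_t)egid << 16, 0,
                                                          fdp_log_done, &done);
            if (rc != 0) {
                    return rc;
            }
            while (!done) {
                    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            }
            return 0;
    }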
00:10:21.630  
00:10:21.630  NVM Specific Namespace Data
00:10:21.630  ===========================
00:10:21.630  Logical Block Storage Tag Mask:               0
00:10:21.630  Protection Information Capabilities:
00:10:21.630    16b Guard Protection Information Storage Tag Support:  No
00:10:21.630    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:21.630    Storage Tag Check Read Support:                        No
00:10:21.630  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.630  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.630  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.630  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.630  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.630  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.630  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.630  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.630  =====================================================
00:10:21.630  NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:21.630  =====================================================
00:10:21.630  Controller Capabilities/Features
00:10:21.630  ================================
00:10:21.630  Vendor ID:                             1b36
00:10:21.630  Subsystem Vendor ID:                   1af4
00:10:21.630  Serial Number:                         12342
00:10:21.630  Model Number:                          QEMU NVMe Ctrl
00:10:21.630  Firmware Version:                      8.0.0
00:10:21.630  Recommended Arb Burst:                 6
00:10:21.630  IEEE OUI Identifier:                   00 54 52
00:10:21.630  Multi-path I/O
00:10:21.630    May have multiple subsystem ports:   No
00:10:21.630    May have multiple controllers:       No
00:10:21.630    Associated with SR-IOV VF:           No
00:10:21.630  Max Data Transfer Size:                524288
00:10:21.630  Max Number of Namespaces:              256
00:10:21.630  Max Number of I/O Queues:              64
00:10:21.630  NVMe Specification Version (VS):       1.4
00:10:21.630  NVMe Specification Version (Identify): 1.4
00:10:21.630  Maximum Queue Entries:                 2048
00:10:21.630  Contiguous Queues Required:            Yes
00:10:21.630  Arbitration Mechanisms Supported
00:10:21.630    Weighted Round Robin:                Not Supported
00:10:21.630    Vendor Specific:                     Not Supported
00:10:21.630  Reset Timeout:                         7500 ms
00:10:21.630  Doorbell Stride:                       4 bytes
00:10:21.630  NVM Subsystem Reset:                   Not Supported
00:10:21.630  Command Sets Supported
00:10:21.630    NVM Command Set:                     Supported
00:10:21.630  Boot Partition:                        Not Supported
00:10:21.630  Memory Page Size Minimum:              4096 bytes
00:10:21.630  Memory Page Size Maximum:              65536 bytes
00:10:21.630  Persistent Memory Region:              Not Supported
00:10:21.630  Optional Asynchronous Events Supported
00:10:21.630    Namespace Attribute Notices:         Supported
00:10:21.630    Firmware Activation Notices:         Not Supported
00:10:21.630    ANA Change Notices:                  Not Supported
00:10:21.630    PLE Aggregate Log Change Notices:    Not Supported
00:10:21.630    LBA Status Info Alert Notices:       Not Supported
00:10:21.630    EGE Aggregate Log Change Notices:    Not Supported
00:10:21.630    Normal NVM Subsystem Shutdown event: Not Supported
00:10:21.630    Zone Descriptor Change Notices:      Not Supported
00:10:21.630    Discovery Log Change Notices:        Not Supported
00:10:21.630  Controller Attributes
00:10:21.630    128-bit Host Identifier:             Not Supported
00:10:21.630    Non-Operational Permissive Mode:     Not Supported
00:10:21.630    NVM Sets:                            Not Supported
00:10:21.630    Read Recovery Levels:                Not Supported
00:10:21.630    Endurance Groups:                    Not Supported
00:10:21.630    Predictable Latency Mode:            Not Supported
00:10:21.630    Traffic Based Keep Alive:            Not Supported
00:10:21.630    Namespace Granularity:               Not Supported
00:10:21.630    SQ Associations:                     Not Supported
00:10:21.630    UUID List:                           Not Supported
00:10:21.630    Multi-Domain Subsystem:              Not Supported
00:10:21.630    Fixed Capacity Management:           Not Supported
00:10:21.630    Variable Capacity Management:        Not Supported
00:10:21.630    Delete Endurance Group:              Not Supported
00:10:21.630    Delete NVM Set:                      Not Supported
00:10:21.630    Extended LBA Formats Supported:      Supported
00:10:21.630    Flexible Data Placement Supported:   Not Supported
00:10:21.630  
00:10:21.630  Controller Memory Buffer Support
00:10:21.630  ================================
00:10:21.630  Supported:                             No
00:10:21.630  
00:10:21.630  Persistent Memory Region Support
00:10:21.630  ================================
00:10:21.630  Supported:                             No
00:10:21.630  
00:10:21.630  Admin Command Set Attributes
00:10:21.630  ============================
00:10:21.630  Security Send/Receive:                 Not Supported
00:10:21.630  Format NVM:                            Supported
00:10:21.630  Firmware Activate/Download:            Not Supported
00:10:21.630  Namespace Management:                  Supported
00:10:21.630  Device Self-Test:                      Not Supported
00:10:21.630  Directives:                            Supported
00:10:21.630  NVMe-MI:                               Not Supported
00:10:21.630  Virtualization Management:             Not Supported
00:10:21.630  Doorbell Buffer Config:                Supported
00:10:21.630  Get LBA Status Capability:             Not Supported
00:10:21.630  Command & Feature Lockdown Capability: Not Supported
00:10:21.630  Abort Command Limit:                   4
00:10:21.630  Async Event Request Limit:             4
00:10:21.630  Number of Firmware Slots:              N/A
00:10:21.630  Firmware Slot 1 Read-Only:             N/A
00:10:21.630  Firmware Activation Without Reset:     N/A
00:10:21.630  Multiple Update Detection Support:     N/A
00:10:21.630  Firmware Update Granularity:           No Information Provided
00:10:21.630  Per-Namespace SMART Log:               Yes
00:10:21.630  Asymmetric Namespace Access Log Page:  Not Supported
00:10:21.630  Subsystem NQN:                         nqn.2019-08.org.qemu:12342
00:10:21.630  Command Effects Log Page:              Supported
00:10:21.630  Get Log Page Extended Data:            Supported
00:10:21.630  Telemetry Log Pages:                   Not Supported
00:10:21.630  Persistent Event Log Pages:            Not Supported
00:10:21.630  Supported Log Pages Log Page:          May Support
00:10:21.630  Commands Supported & Effects Log Page: Not Supported
00:10:21.630  Feature Identifiers & Effects Log Page: May Support
00:10:21.630  NVMe-MI Commands & Effects Log Page:   May Support
00:10:21.630  Data Area 4 for Telemetry Log:         Not Supported
00:10:21.630  Error Log Page Entries Supported:      1
00:10:21.630  Keep Alive:                            Not Supported
00:10:21.630  
00:10:21.630  NVM Command Set Attributes
00:10:21.630  ==========================
00:10:21.630  Submission Queue Entry Size
00:10:21.630    Max:                       64
00:10:21.630    Min:                       64
00:10:21.630  Completion Queue Entry Size
00:10:21.630    Max:                       16
00:10:21.630    Min:                       16
00:10:21.630  Number of Namespaces:        256
00:10:21.630  Compare Command:             Supported
00:10:21.630  Write Uncorrectable Command: Not Supported
00:10:21.630  Dataset Management Command:  Supported
00:10:21.630  Write Zeroes Command:        Supported
00:10:21.630  Set Features Save Field:     Supported
00:10:21.630  Reservations:                Not Supported
00:10:21.630  Timestamp:                   Supported
00:10:21.630  Copy:                        Supported
00:10:21.630  Volatile Write Cache:        Present
00:10:21.630  Atomic Write Unit (Normal):  1
00:10:21.630  Atomic Write Unit (PFail):   1
00:10:21.631  Atomic Compare & Write Unit: 1
00:10:21.631  Fused Compare & Write:       Not Supported
00:10:21.631  Scatter-Gather List
00:10:21.631    SGL Command Set:           Supported
00:10:21.631    SGL Keyed:                 Not Supported
00:10:21.631    SGL Bit Bucket Descriptor: Not Supported
00:10:21.631    SGL Metadata Pointer:      Not Supported
00:10:21.631    Oversized SGL:             Not Supported
00:10:21.631    SGL Metadata Address:      Not Supported
00:10:21.631    SGL Offset:                Not Supported
00:10:21.631    Transport SGL Data Block:  Not Supported
00:10:21.631  Replay Protected Memory Block:  Not Supported
00:10:21.631  
00:10:21.631  Firmware Slot Information
00:10:21.631  =========================
00:10:21.631  Active slot:                 1
00:10:21.631  Slot 1 Firmware Revision:    1.0
00:10:21.631  
00:10:21.631  
00:10:21.631  Commands Supported and Effects
00:10:21.631  ==============================
00:10:21.631  Admin Commands
00:10:21.631  --------------
00:10:21.631     Delete I/O Submission Queue (00h): Supported 
00:10:21.631     Create I/O Submission Queue (01h): Supported 
00:10:21.631                    Get Log Page (02h): Supported 
00:10:21.631     Delete I/O Completion Queue (04h): Supported 
00:10:21.631     Create I/O Completion Queue (05h): Supported 
00:10:21.631                        Identify (06h): Supported 
00:10:21.631                           Abort (08h): Supported 
00:10:21.631                    Set Features (09h): Supported 
00:10:21.631                    Get Features (0Ah): Supported 
00:10:21.631      Asynchronous Event Request (0Ch): Supported 
00:10:21.631            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:21.631                  Directive Send (19h): Supported 
00:10:21.631               Directive Receive (1Ah): Supported 
00:10:21.631       Virtualization Management (1Ch): Supported 
00:10:21.631          Doorbell Buffer Config (7Ch): Supported 
00:10:21.631                      Format NVM (80h): Supported LBA-Change 
00:10:21.631  I/O Commands
00:10:21.631  ------------
00:10:21.631                           Flush (00h): Supported LBA-Change 
00:10:21.631                           Write (01h): Supported LBA-Change 
00:10:21.631                            Read (02h): Supported 
00:10:21.631                         Compare (05h): Supported 
00:10:21.631                    Write Zeroes (08h): Supported LBA-Change 
00:10:21.631              Dataset Management (09h): Supported LBA-Change 
00:10:21.631                         Unknown (0Ch): Supported 
00:10:21.631                         Unknown (12h): Supported 
00:10:21.631                            Copy (19h): Supported LBA-Change 
00:10:21.631                         Unknown (1Dh): Supported LBA-Change 
00:10:21.631  
00:10:21.631  Error Log
00:10:21.631  =========
00:10:21.631  
00:10:21.631  Arbitration
00:10:21.631  ===========
00:10:21.631  Arbitration Burst:           no limit
00:10:21.631  
00:10:21.631  Power Management
00:10:21.631  ================
00:10:21.631  Number of Power States:          1
00:10:21.631  Current Power State:             Power State #0
00:10:21.631  Power State #0:
00:10:21.631    Max Power:                     25.00 W
00:10:21.631    Non-Operational State:         Operational
00:10:21.631    Entry Latency:                 16 microseconds
00:10:21.631    Exit Latency:                  4 microseconds
00:10:21.631    Relative Read Throughput:      0
00:10:21.631    Relative Read Latency:         0
00:10:21.631    Relative Write Throughput:     0
00:10:21.631    Relative Write Latency:        0
00:10:21.631    Idle Power:                     Not Reported
00:10:21.631    Active Power:                   Not Reported
00:10:21.631  Non-Operational Permissive Mode: Not Supported
00:10:21.631  
00:10:21.631  Health Information
00:10:21.631  ==================
00:10:21.631  Critical Warnings:
00:10:21.631    Available Spare Space:     OK
00:10:21.631    Temperature:               OK
00:10:21.631    Device Reliability:        OK
00:10:21.631    Read Only:                 No
00:10:21.631    Volatile Memory Backup:    OK
00:10:21.631  Current Temperature:         323 Kelvin (50 Celsius)
00:10:21.631  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:21.631  Available Spare:             0%
00:10:21.631  Available Spare Threshold:   0%
00:10:21.631  Life Percentage Used:        0%
00:10:21.631  Data Units Read:             2123
00:10:21.631  Data Units Written:          1910
00:10:21.631  Host Read Commands:          104247
00:10:21.631  Host Write Commands:         102516
00:10:21.631  Controller Busy Time:        0 minutes
00:10:21.631  Power Cycles:                0
00:10:21.631  Power On Hours:              0 hours
00:10:21.631  Unsafe Shutdowns:            0
00:10:21.631  Unrecoverable Media Errors:  0
00:10:21.631  Lifetime Error Log Entries:  0
00:10:21.631  Warning Temperature Time:    0 minutes
00:10:21.631  Critical Temperature Time:   0 minutes
00:10:21.631  
00:10:21.631  Number of Queues
00:10:21.631  ================
00:10:21.631  Number of I/O Submission Queues:      64
00:10:21.631  Number of I/O Completion Queues:      64
00:10:21.631  
00:10:21.631  ZNS Specific Controller Data
00:10:21.631  ============================
00:10:21.631  Zone Append Size Limit:      0
00:10:21.631  
00:10:21.631  
00:10:21.631  Active Namespaces
00:10:21.631  =================
00:10:21.631  Namespace ID:1
00:10:21.631  Error Recovery Timeout:                Unlimited
00:10:21.631  Command Set Identifier:                NVM (00h)
00:10:21.631  Deallocate:                            Supported
00:10:21.631  Deallocated/Unwritten Error:           Supported
00:10:21.631  Deallocated Read Value:                All 0x00
00:10:21.631  Deallocate in Write Zeroes:            Not Supported
00:10:21.631  Deallocated Guard Field:               0xFFFF
00:10:21.631  Flush:                                 Supported
00:10:21.631  Reservation:                           Not Supported
00:10:21.631  Namespace Sharing Capabilities:        Private
00:10:21.631  Size (in LBAs):                        1048576 (4GiB)
00:10:21.631  Capacity (in LBAs):                    1048576 (4GiB)
00:10:21.631  Utilization (in LBAs):                 1048576 (4GiB)
00:10:21.631  Thin Provisioning:                     Not Supported
00:10:21.631  Per-NS Atomic Units:                   No
00:10:21.631  Maximum Single Source Range Length:    128
00:10:21.631  Maximum Copy Length:                   128
00:10:21.631  Maximum Source Range Count:            128
00:10:21.631  NGUID/EUI64 Never Reused:              No
00:10:21.631  Namespace Write Protected:             No
00:10:21.631  Number of LBA Formats:                 8
00:10:21.631  Current LBA Format:                    LBA Format #04
00:10:21.631  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:21.631  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:21.631  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:21.631  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:21.631  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:21.631  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:21.631  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:21.631  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:21.631  
00:10:21.631  NVM Specific Namespace Data
00:10:21.631  ===========================
00:10:21.631  Logical Block Storage Tag Mask:               0
00:10:21.631  Protection Information Capabilities:
00:10:21.631    16b Guard Protection Information Storage Tag Support:  No
00:10:21.631    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:21.631    Storage Tag Check Read Support:                        No
00:10:21.631  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Namespace ID:2
00:10:21.631  Error Recovery Timeout:                Unlimited
00:10:21.631  Command Set Identifier:                NVM (00h)
00:10:21.631  Deallocate:                            Supported
00:10:21.631  Deallocated/Unwritten Error:           Supported
00:10:21.631  Deallocated Read Value:                All 0x00
00:10:21.631  Deallocate in Write Zeroes:            Not Supported
00:10:21.631  Deallocated Guard Field:               0xFFFF
00:10:21.631  Flush:                                 Supported
00:10:21.631  Reservation:                           Not Supported
00:10:21.631  Namespace Sharing Capabilities:        Private
00:10:21.631  Size (in LBAs):                        1048576 (4GiB)
00:10:21.631  Capacity (in LBAs):                    1048576 (4GiB)
00:10:21.631  Utilization (in LBAs):                 1048576 (4GiB)
00:10:21.631  Thin Provisioning:                     Not Supported
00:10:21.631  Per-NS Atomic Units:                   No
00:10:21.631  Maximum Single Source Range Length:    128
00:10:21.631  Maximum Copy Length:                   128
00:10:21.631  Maximum Source Range Count:            128
00:10:21.631  NGUID/EUI64 Never Reused:              No
00:10:21.631  Namespace Write Protected:             No
00:10:21.631  Number of LBA Formats:                 8
00:10:21.631  Current LBA Format:                    LBA Format #04
00:10:21.631  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:21.631  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:21.631  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:21.631  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:21.631  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:21.631  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:21.631  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:21.631  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:21.631  
00:10:21.631  NVM Specific Namespace Data
00:10:21.631  ===========================
00:10:21.631  Logical Block Storage Tag Mask:               0
00:10:21.631  Protection Information Capabilities:
00:10:21.631    16b Guard Protection Information Storage Tag Support:  No
00:10:21.631    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:21.631    Storage Tag Check Read Support:                        No
00:10:21.631  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Namespace ID:3
00:10:21.631  Error Recovery Timeout:                Unlimited
00:10:21.631  Command Set Identifier:                NVM (00h)
00:10:21.631  Deallocate:                            Supported
00:10:21.631  Deallocated/Unwritten Error:           Supported
00:10:21.631  Deallocated Read Value:                All 0x00
00:10:21.631  Deallocate in Write Zeroes:            Not Supported
00:10:21.631  Deallocated Guard Field:               0xFFFF
00:10:21.631  Flush:                                 Supported
00:10:21.631  Reservation:                           Not Supported
00:10:21.631  Namespace Sharing Capabilities:        Private
00:10:21.631  Size (in LBAs):                        1048576 (4GiB)
00:10:21.631  Capacity (in LBAs):                    1048576 (4GiB)
00:10:21.631  Utilization (in LBAs):                 1048576 (4GiB)
00:10:21.631  Thin Provisioning:                     Not Supported
00:10:21.631  Per-NS Atomic Units:                   No
00:10:21.631  Maximum Single Source Range Length:    128
00:10:21.631  Maximum Copy Length:                   128
00:10:21.631  Maximum Source Range Count:            128
00:10:21.631  NGUID/EUI64 Never Reused:              No
00:10:21.631  Namespace Write Protected:             No
00:10:21.631  Number of LBA Formats:                 8
00:10:21.631  Current LBA Format:                    LBA Format #04
00:10:21.631  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:21.631  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:21.631  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:21.631  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:21.631  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:21.631  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:21.631  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:21.631  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:21.631  
00:10:21.631  NVM Specific Namespace Data
00:10:21.631  ===========================
00:10:21.631  Logical Block Storage Tag Mask:               0
00:10:21.631  Protection Information Capabilities:
00:10:21.631    16b Guard Protection Information Storage Tag Support:  No
00:10:21.631    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:21.631    Storage Tag Check Read Support:                        No
00:10:21.631  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:21.631  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
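
Each dump in this run is produced by one spdk_nvme_identify invocation per PCI address, as the nvme.sh trace below shows. Underneath, any such tool goes through SPDK's standard probe/attach flow; a minimal, self-contained sketch against a recent SPDK (error handling trimmed, with the printed fields chosen to mirror the dumps above):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Called once per controller found; returning true means "attach to it". */
    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            return true;
    }

    /* Called after attach; print a few identify fields like the dumps above. */
    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
            const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

            printf("%s: VID 0x%04x, SN %.20s, %u namespaces\n",
                   trid->traddr, cdata->vid, (const char *)cdata->sn, cdata->nn);
    }

    int
    main(void)
    {
            struct spdk_env_opts opts;

            opts.opts_size = sizeof(opts); /* required before init on newer SPDK */
            spdk_env_opts_init(&opts);
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }
            /* A NULL transport ID probes all local PCIe controllers, the
             * programmatic equivalent of looping over the bdfs as nvme.sh does. */
            return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0;
    }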
00:10:21.631   14:21:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:10:21.631   14:21:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:10:22.198  =====================================================
00:10:22.198  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:22.198  =====================================================
00:10:22.198  Controller Capabilities/Features
00:10:22.198  ================================
00:10:22.198  Vendor ID:                             1b36
00:10:22.198  Subsystem Vendor ID:                   1af4
00:10:22.198  Serial Number:                         12340
00:10:22.198  Model Number:                          QEMU NVMe Ctrl
00:10:22.198  Firmware Version:                      8.0.0
00:10:22.198  Recommended Arb Burst:                 6
00:10:22.198  IEEE OUI Identifier:                   00 54 52
00:10:22.198  Multi-path I/O
00:10:22.198    May have multiple subsystem ports:   No
00:10:22.198    May have multiple controllers:       No
00:10:22.198    Associated with SR-IOV VF:           No
00:10:22.198  Max Data Transfer Size:                524288
00:10:22.198  Max Number of Namespaces:              256
00:10:22.198  Max Number of I/O Queues:              64
00:10:22.198  NVMe Specification Version (VS):       1.4
00:10:22.198  NVMe Specification Version (Identify): 1.4
00:10:22.198  Maximum Queue Entries:                 2048
00:10:22.198  Contiguous Queues Required:            Yes
00:10:22.198  Arbitration Mechanisms Supported
00:10:22.198    Weighted Round Robin:                Not Supported
00:10:22.198    Vendor Specific:                     Not Supported
00:10:22.198  Reset Timeout:                         7500 ms
00:10:22.198  Doorbell Stride:                       4 bytes
00:10:22.198  NVM Subsystem Reset:                   Not Supported
00:10:22.198  Command Sets Supported
00:10:22.198    NVM Command Set:                     Supported
00:10:22.198  Boot Partition:                        Not Supported
00:10:22.198  Memory Page Size Minimum:              4096 bytes
00:10:22.198  Memory Page Size Maximum:              65536 bytes
00:10:22.198  Persistent Memory Region:              Not Supported
00:10:22.198  Optional Asynchronous Events Supported
00:10:22.198    Namespace Attribute Notices:         Supported
00:10:22.198    Firmware Activation Notices:         Not Supported
00:10:22.198    ANA Change Notices:                  Not Supported
00:10:22.198    PLE Aggregate Log Change Notices:    Not Supported
00:10:22.198    LBA Status Info Alert Notices:       Not Supported
00:10:22.198    EGE Aggregate Log Change Notices:    Not Supported
00:10:22.198    Normal NVM Subsystem Shutdown event: Not Supported
00:10:22.198    Zone Descriptor Change Notices:      Not Supported
00:10:22.198    Discovery Log Change Notices:        Not Supported
00:10:22.198  Controller Attributes
00:10:22.198    128-bit Host Identifier:             Not Supported
00:10:22.198    Non-Operational Permissive Mode:     Not Supported
00:10:22.198    NVM Sets:                            Not Supported
00:10:22.198    Read Recovery Levels:                Not Supported
00:10:22.198    Endurance Groups:                    Not Supported
00:10:22.198    Predictable Latency Mode:            Not Supported
00:10:22.198    Traffic Based Keep Alive:            Not Supported
00:10:22.198    Namespace Granularity:               Not Supported
00:10:22.198    SQ Associations:                     Not Supported
00:10:22.198    UUID List:                           Not Supported
00:10:22.198    Multi-Domain Subsystem:              Not Supported
00:10:22.198    Fixed Capacity Management:           Not Supported
00:10:22.198    Variable Capacity Management:        Not Supported
00:10:22.198    Delete Endurance Group:              Not Supported
00:10:22.198    Delete NVM Set:                      Not Supported
00:10:22.198    Extended LBA Formats Supported:      Supported
00:10:22.198    Flexible Data Placement Supported:   Not Supported
00:10:22.198  
00:10:22.198  Controller Memory Buffer Support
00:10:22.198  ================================
00:10:22.198  Supported:                             No
00:10:22.198  
00:10:22.198  Persistent Memory Region Support
00:10:22.198  ================================
00:10:22.198  Supported:                             No
00:10:22.198  
00:10:22.198  Admin Command Set Attributes
00:10:22.198  ============================
00:10:22.198  Security Send/Receive:                 Not Supported
00:10:22.198  Format NVM:                            Supported
00:10:22.198  Firmware Activate/Download:            Not Supported
00:10:22.198  Namespace Management:                  Supported
00:10:22.198  Device Self-Test:                      Not Supported
00:10:22.198  Directives:                            Supported
00:10:22.198  NVMe-MI:                               Not Supported
00:10:22.198  Virtualization Management:             Not Supported
00:10:22.198  Doorbell Buffer Config:                Supported
00:10:22.198  Get LBA Status Capability:             Not Supported
00:10:22.198  Command & Feature Lockdown Capability: Not Supported
00:10:22.198  Abort Command Limit:                   4
00:10:22.198  Async Event Request Limit:             4
00:10:22.198  Number of Firmware Slots:              N/A
00:10:22.198  Firmware Slot 1 Read-Only:             N/A
00:10:22.198  Firmware Activation Without Reset:     N/A
00:10:22.198  Multiple Update Detection Support:     N/A
00:10:22.198  Firmware Update Granularity:           No Information Provided
00:10:22.198  Per-Namespace SMART Log:               Yes
00:10:22.198  Asymmetric Namespace Access Log Page:  Not Supported
00:10:22.198  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:10:22.198  Command Effects Log Page:              Supported
00:10:22.198  Get Log Page Extended Data:            Supported
00:10:22.198  Telemetry Log Pages:                   Not Supported
00:10:22.198  Persistent Event Log Pages:            Not Supported
00:10:22.198  Supported Log Pages Log Page:          May Support
00:10:22.198  Commands Supported & Effects Log Page: Not Supported
00:10:22.198  Feature Identifiers & Effects Log Page: May Support
00:10:22.198  NVMe-MI Commands & Effects Log Page:   May Support
00:10:22.198  Data Area 4 for Telemetry Log:         Not Supported
00:10:22.198  Error Log Page Entries Supported:      1
00:10:22.198  Keep Alive:                            Not Supported
00:10:22.198  
00:10:22.198  NVM Command Set Attributes
00:10:22.198  ==========================
00:10:22.198  Submission Queue Entry Size
00:10:22.198    Max:                       64
00:10:22.198    Min:                       64
00:10:22.198  Completion Queue Entry Size
00:10:22.198    Max:                       16
00:10:22.198    Min:                       16
00:10:22.198  Number of Namespaces:        256
00:10:22.198  Compare Command:             Supported
00:10:22.198  Write Uncorrectable Command: Not Supported
00:10:22.198  Dataset Management Command:  Supported
00:10:22.198  Write Zeroes Command:        Supported
00:10:22.198  Set Features Save Field:     Supported
00:10:22.198  Reservations:                Not Supported
00:10:22.198  Timestamp:                   Supported
00:10:22.198  Copy:                        Supported
00:10:22.198  Volatile Write Cache:        Present
00:10:22.198  Atomic Write Unit (Normal):  1
00:10:22.198  Atomic Write Unit (PFail):   1
00:10:22.198  Atomic Compare & Write Unit: 1
00:10:22.198  Fused Compare & Write:       Not Supported
00:10:22.198  Scatter-Gather List
00:10:22.198    SGL Command Set:           Supported
00:10:22.198    SGL Keyed:                 Not Supported
00:10:22.198    SGL Bit Bucket Descriptor: Not Supported
00:10:22.198    SGL Metadata Pointer:      Not Supported
00:10:22.198    Oversized SGL:             Not Supported
00:10:22.198    SGL Metadata Address:      Not Supported
00:10:22.198    SGL Offset:                Not Supported
00:10:22.198    Transport SGL Data Block:  Not Supported
00:10:22.198  Replay Protected Memory Block:  Not Supported
00:10:22.198  
00:10:22.198  Firmware Slot Information
00:10:22.199  =========================
00:10:22.199  Active slot:                 1
00:10:22.199  Slot 1 Firmware Revision:    1.0
00:10:22.199  
00:10:22.199  
00:10:22.199  Commands Supported and Effects
00:10:22.199  ==============================
00:10:22.199  Admin Commands
00:10:22.199  --------------
00:10:22.199     Delete I/O Submission Queue (00h): Supported 
00:10:22.199     Create I/O Submission Queue (01h): Supported 
00:10:22.199                    Get Log Page (02h): Supported 
00:10:22.199     Delete I/O Completion Queue (04h): Supported 
00:10:22.199     Create I/O Completion Queue (05h): Supported 
00:10:22.199                        Identify (06h): Supported 
00:10:22.199                           Abort (08h): Supported 
00:10:22.199                    Set Features (09h): Supported 
00:10:22.199                    Get Features (0Ah): Supported 
00:10:22.199      Asynchronous Event Request (0Ch): Supported 
00:10:22.199            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:22.199                  Directive Send (19h): Supported 
00:10:22.199               Directive Receive (1Ah): Supported 
00:10:22.199       Virtualization Management (1Ch): Supported 
00:10:22.199          Doorbell Buffer Config (7Ch): Supported 
00:10:22.199                      Format NVM (80h): Supported LBA-Change 
00:10:22.199  I/O Commands
00:10:22.199  ------------
00:10:22.199                           Flush (00h): Supported LBA-Change 
00:10:22.199                           Write (01h): Supported LBA-Change 
00:10:22.199                            Read (02h): Supported 
00:10:22.199                         Compare (05h): Supported 
00:10:22.199                    Write Zeroes (08h): Supported LBA-Change 
00:10:22.199              Dataset Management (09h): Supported LBA-Change 
00:10:22.199                         Unknown (0Ch): Supported 
00:10:22.199                         Unknown (12h): Supported 
00:10:22.199                            Copy (19h): Supported LBA-Change 
00:10:22.199                         Unknown (1Dh): Supported LBA-Change 
00:10:22.199  
00:10:22.199  Error Log
00:10:22.199  =========
00:10:22.199  
00:10:22.199  Arbitration
00:10:22.199  ===========
00:10:22.199  Arbitration Burst:           no limit
00:10:22.199  
00:10:22.199  Power Management
00:10:22.199  ================
00:10:22.199  Number of Power States:          1
00:10:22.199  Current Power State:             Power State #0
00:10:22.199  Power State #0:
00:10:22.199    Max Power:                     25.00 W
00:10:22.199    Non-Operational State:         Operational
00:10:22.199    Entry Latency:                 16 microseconds
00:10:22.199    Exit Latency:                  4 microseconds
00:10:22.199    Relative Read Throughput:      0
00:10:22.199    Relative Read Latency:         0
00:10:22.199    Relative Write Throughput:     0
00:10:22.199    Relative Write Latency:        0
00:10:22.199    Idle Power:                     Not Reported
00:10:22.199    Active Power:                   Not Reported
00:10:22.199  Non-Operational Permissive Mode: Not Supported
00:10:22.199  
00:10:22.199  Health Information
00:10:22.199  ==================
00:10:22.199  Critical Warnings:
00:10:22.199    Available Spare Space:     OK
00:10:22.199    Temperature:               OK
00:10:22.199    Device Reliability:        OK
00:10:22.199    Read Only:                 No
00:10:22.199    Volatile Memory Backup:    OK
00:10:22.199  Current Temperature:         323 Kelvin (50 Celsius)
00:10:22.199  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:22.199  Available Spare:             0%
00:10:22.199  Available Spare Threshold:   0%
00:10:22.199  Life Percentage Used:        0%
00:10:22.199  Data Units Read:             669
00:10:22.199  Data Units Written:          597
00:10:22.199  Host Read Commands:          34228
00:10:22.199  Host Write Commands:         34014
00:10:22.199  Controller Busy Time:        0 minutes
00:10:22.199  Power Cycles:                0
00:10:22.199  Power On Hours:              0 hours
00:10:22.199  Unsafe Shutdowns:            0
00:10:22.199  Unrecoverable Media Errors:  0
00:10:22.199  Lifetime Error Log Entries:  0
00:10:22.199  Warning Temperature Time:    0 minutes
00:10:22.199  Critical Temperature Time:   0 minutes
00:10:22.199  
00:10:22.199  Number of Queues
00:10:22.199  ================
00:10:22.199  Number of I/O Submission Queues:      64
00:10:22.199  Number of I/O Completion Queues:      64
00:10:22.199  
00:10:22.199  ZNS Specific Controller Data
00:10:22.199  ============================
00:10:22.199  Zone Append Size Limit:      0
00:10:22.199  
00:10:22.199  
00:10:22.199  Active Namespaces
00:10:22.199  =================
00:10:22.199  Namespace ID:1
00:10:22.199  Error Recovery Timeout:                Unlimited
00:10:22.199  Command Set Identifier:                NVM (00h)
00:10:22.199  Deallocate:                            Supported
00:10:22.199  Deallocated/Unwritten Error:           Supported
00:10:22.199  Deallocated Read Value:                All 0x00
00:10:22.199  Deallocate in Write Zeroes:            Not Supported
00:10:22.199  Deallocated Guard Field:               0xFFFF
00:10:22.199  Flush:                                 Supported
00:10:22.199  Reservation:                           Not Supported
00:10:22.199  Metadata Transferred as:               Separate Metadata Buffer
00:10:22.199  Namespace Sharing Capabilities:        Private
00:10:22.199  Size (in LBAs):                        1548666 (5GiB)
00:10:22.199  Capacity (in LBAs):                    1548666 (5GiB)
00:10:22.199  Utilization (in LBAs):                 1548666 (5GiB)
00:10:22.199  Thin Provisioning:                     Not Supported
00:10:22.199  Per-NS Atomic Units:                   No
00:10:22.199  Maximum Single Source Range Length:    128
00:10:22.199  Maximum Copy Length:                   128
00:10:22.199  Maximum Source Range Count:            128
00:10:22.199  NGUID/EUI64 Never Reused:              No
00:10:22.199  Namespace Write Protected:             No
00:10:22.199  Number of LBA Formats:                 8
00:10:22.199  Current LBA Format:                    LBA Format #07
00:10:22.199  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:22.199  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:22.199  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:22.199  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:22.199  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:22.199  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:22.199  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:22.199  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:22.199  
00:10:22.199  NVM Specific Namespace Data
00:10:22.199  ===========================
00:10:22.199  Logical Block Storage Tag Mask:               0
00:10:22.199  Protection Information Capabilities:
00:10:22.199    16b Guard Protection Information Storage Tag Support:  No
00:10:22.199    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:22.199    Storage Tag Check Read Support:                        No
00:10:22.199  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:22.199  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:22.199  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:22.199  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:22.199  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:22.199  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:22.199  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:22.199  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:10:22.199   14:21:00 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:10:22.199   14:21:00 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0
00:10:22.458  =====================================================
00:10:22.458  NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:22.458  =====================================================
00:10:22.458  Controller Capabilities/Features
00:10:22.458  ================================
00:10:22.458  Vendor ID:                             1b36
00:10:22.458  Subsystem Vendor ID:                   1af4
00:10:22.458  Serial Number:                         12341
00:10:22.458  Model Number:                          QEMU NVMe Ctrl
00:10:22.458  Firmware Version:                      8.0.0
00:10:22.458  Recommended Arb Burst:                 6
00:10:22.458  IEEE OUI Identifier:                   00 54 52
00:10:22.459  Multi-path I/O
00:10:22.459    May have multiple subsystem ports:   No
00:10:22.459    May have multiple controllers:       No
00:10:22.459    Associated with SR-IOV VF:           No
00:10:22.459  Max Data Transfer Size:                524288
00:10:22.459  Max Number of Namespaces:              256
00:10:22.459  Max Number of I/O Queues:              64
00:10:22.459  NVMe Specification Version (VS):       1.4
00:10:22.459  NVMe Specification Version (Identify): 1.4
00:10:22.459  Maximum Queue Entries:                 2048
00:10:22.459  Contiguous Queues Required:            Yes
00:10:22.459  Arbitration Mechanisms Supported
00:10:22.459    Weighted Round Robin:                Not Supported
00:10:22.459    Vendor Specific:                     Not Supported
00:10:22.459  Reset Timeout:                         7500 ms
00:10:22.459  Doorbell Stride:                       4 bytes
00:10:22.459  NVM Subsystem Reset:                   Not Supported
00:10:22.459  Command Sets Supported
00:10:22.459    NVM Command Set:                     Supported
00:10:22.459  Boot Partition:                        Not Supported
00:10:22.459  Memory Page Size Minimum:              4096 bytes
00:10:22.459  Memory Page Size Maximum:              65536 bytes
00:10:22.459  Persistent Memory Region:              Not Supported
00:10:22.459  Optional Asynchronous Events Supported
00:10:22.459    Namespace Attribute Notices:         Supported
00:10:22.459    Firmware Activation Notices:         Not Supported
00:10:22.459    ANA Change Notices:                  Not Supported
00:10:22.459    PLE Aggregate Log Change Notices:    Not Supported
00:10:22.459    LBA Status Info Alert Notices:       Not Supported
00:10:22.459    EGE Aggregate Log Change Notices:    Not Supported
00:10:22.459    Normal NVM Subsystem Shutdown event: Not Supported
00:10:22.459    Zone Descriptor Change Notices:      Not Supported
00:10:22.459    Discovery Log Change Notices:        Not Supported
00:10:22.459  Controller Attributes
00:10:22.459    128-bit Host Identifier:             Not Supported
00:10:22.459    Non-Operational Permissive Mode:     Not Supported
00:10:22.459    NVM Sets:                            Not Supported
00:10:22.459    Read Recovery Levels:                Not Supported
00:10:22.459    Endurance Groups:                    Not Supported
00:10:22.459    Predictable Latency Mode:            Not Supported
00:10:22.459    Traffic Based Keep Alive:            Not Supported
00:10:22.459    Namespace Granularity:               Not Supported
00:10:22.459    SQ Associations:                     Not Supported
00:10:22.459    UUID List:                           Not Supported
00:10:22.459    Multi-Domain Subsystem:              Not Supported
00:10:22.459    Fixed Capacity Management:           Not Supported
00:10:22.459    Variable Capacity Management:        Not Supported
00:10:22.459    Delete Endurance Group:              Not Supported
00:10:22.459    Delete NVM Set:                      Not Supported
00:10:22.459    Extended LBA Formats Supported:      Supported
00:10:22.459    Flexible Data Placement Supported:   Not Supported
00:10:22.459  
00:10:22.459  Controller Memory Buffer Support
00:10:22.459  ================================
00:10:22.459  Supported:                             No
00:10:22.459  
00:10:22.459  Persistent Memory Region Support
00:10:22.459  ================================
00:10:22.459  Supported:                             No
00:10:22.459  
00:10:22.459  Admin Command Set Attributes
00:10:22.459  ============================
00:10:22.459  Security Send/Receive:                 Not Supported
00:10:22.459  Format NVM:                            Supported
00:10:22.459  Firmware Activate/Download:            Not Supported
00:10:22.459  Namespace Management:                  Supported
00:10:22.459  Device Self-Test:                      Not Supported
00:10:22.459  Directives:                            Supported
00:10:22.459  NVMe-MI:                               Not Supported
00:10:22.459  Virtualization Management:             Not Supported
00:10:22.459  Doorbell Buffer Config:                Supported
00:10:22.459  Get LBA Status Capability:             Not Supported
00:10:22.459  Command & Feature Lockdown Capability: Not Supported
00:10:22.459  Abort Command Limit:                   4
00:10:22.459  Async Event Request Limit:             4
00:10:22.459  Number of Firmware Slots:              N/A
00:10:22.459  Firmware Slot 1 Read-Only:             N/A
00:10:22.459  Firmware Activation Without Reset:     N/A
00:10:22.459  Multiple Update Detection Support:     N/A
00:10:22.459  Firmware Update Granularity:           No Information Provided
00:10:22.459  Per-Namespace SMART Log:               Yes
00:10:22.459  Asymmetric Namespace Access Log Page:  Not Supported
00:10:22.459  Subsystem NQN:                         nqn.2019-08.org.qemu:12341
00:10:22.459  Command Effects Log Page:              Supported
00:10:22.459  Get Log Page Extended Data:            Supported
00:10:22.459  Telemetry Log Pages:                   Not Supported
00:10:22.459  Persistent Event Log Pages:            Not Supported
00:10:22.459  Supported Log Pages Log Page:          May Support
00:10:22.459  Commands Supported & Effects Log Page: Not Supported
00:10:22.459  Feature Identifiers & Effects Log Page: May Support
00:10:22.459  NVMe-MI Commands & Effects Log Page:   May Support
00:10:22.459  Data Area 4 for Telemetry Log:         Not Supported
00:10:22.459  Error Log Page Entries Supported:      1
00:10:22.459  Keep Alive:                            Not Supported
00:10:22.459  
00:10:22.459  NVM Command Set Attributes
00:10:22.459  ==========================
00:10:22.459  Submission Queue Entry Size
00:10:22.459    Max:                       64
00:10:22.459    Min:                       64
00:10:22.459  Completion Queue Entry Size
00:10:22.459    Max:                       16
00:10:22.459    Min:                       16
00:10:22.459  Number of Namespaces:        256
00:10:22.459  Compare Command:             Supported
00:10:22.459  Write Uncorrectable Command: Not Supported
00:10:22.459  Dataset Management Command:  Supported
00:10:22.459  Write Zeroes Command:        Supported
00:10:22.459  Set Features Save Field:     Supported
00:10:22.459  Reservations:                Not Supported
00:10:22.459  Timestamp:                   Supported
00:10:22.459  Copy:                        Supported
00:10:22.459  Volatile Write Cache:        Present
00:10:22.459  Atomic Write Unit (Normal):  1
00:10:22.459  Atomic Write Unit (PFail):   1
00:10:22.459  Atomic Compare & Write Unit: 1
00:10:22.459  Fused Compare & Write:       Not Supported
00:10:22.459  Scatter-Gather List
00:10:22.459    SGL Command Set:           Supported
00:10:22.459    SGL Keyed:                 Not Supported
00:10:22.459    SGL Bit Bucket Descriptor: Not Supported
00:10:22.459    SGL Metadata Pointer:      Not Supported
00:10:22.459    Oversized SGL:             Not Supported
00:10:22.459    SGL Metadata Address:      Not Supported
00:10:22.459    SGL Offset:                Not Supported
00:10:22.459    Transport SGL Data Block:  Not Supported
00:10:22.459  Replay Protected Memory Block:  Not Supported
00:10:22.459  
00:10:22.459  Firmware Slot Information
00:10:22.459  =========================
00:10:22.459  Active slot:                 1
00:10:22.459  Slot 1 Firmware Revision:    1.0
00:10:22.459  
00:10:22.459  
00:10:22.459  Commands Supported and Effects
00:10:22.459  ==============================
00:10:22.459  Admin Commands
00:10:22.459  --------------
00:10:22.459     Delete I/O Submission Queue (00h): Supported 
00:10:22.459     Create I/O Submission Queue (01h): Supported 
00:10:22.459                    Get Log Page (02h): Supported 
00:10:22.459     Delete I/O Completion Queue (04h): Supported 
00:10:22.459     Create I/O Completion Queue (05h): Supported 
00:10:22.459                        Identify (06h): Supported 
00:10:22.459                           Abort (08h): Supported 
00:10:22.459                    Set Features (09h): Supported 
00:10:22.459                    Get Features (0Ah): Supported 
00:10:22.459      Asynchronous Event Request (0Ch): Supported 
00:10:22.459            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:22.459                  Directive Send (19h): Supported 
00:10:22.459               Directive Receive (1Ah): Supported 
00:10:22.459       Virtualization Management (1Ch): Supported 
00:10:22.459          Doorbell Buffer Config (7Ch): Supported 
00:10:22.459                      Format NVM (80h): Supported LBA-Change 
00:10:22.459  I/O Commands
00:10:22.459  ------------
00:10:22.459                           Flush (00h): Supported LBA-Change 
00:10:22.459                           Write (01h): Supported LBA-Change 
00:10:22.459                            Read (02h): Supported 
00:10:22.459                         Compare (05h): Supported 
00:10:22.459                    Write Zeroes (08h): Supported LBA-Change 
00:10:22.459              Dataset Management (09h): Supported LBA-Change 
00:10:22.459                         Unknown (0Ch): Supported 
00:10:22.459                         Unknown (12h): Supported 
00:10:22.459                            Copy (19h): Supported LBA-Change 
00:10:22.459                         Unknown (1Dh): Supported LBA-Change 
00:10:22.459  
00:10:22.459  Error Log
00:10:22.459  =========
00:10:22.459  
00:10:22.459  Arbitration
00:10:22.459  ===========
00:10:22.459  Arbitration Burst:           no limit
00:10:22.459  
00:10:22.459  Power Management
00:10:22.459  ================
00:10:22.459  Number of Power States:          1
00:10:22.459  Current Power State:             Power State #0
00:10:22.459  Power State #0:
00:10:22.459    Max Power:                     25.00 W
00:10:22.459    Non-Operational State:         Operational
00:10:22.459    Entry Latency:                 16 microseconds
00:10:22.459    Exit Latency:                  4 microseconds
00:10:22.459    Relative Read Throughput:      0
00:10:22.459    Relative Read Latency:         0
00:10:22.459    Relative Write Throughput:     0
00:10:22.459    Relative Write Latency:        0
00:10:22.459    Idle Power:                     Not Reported
00:10:22.459    Active Power:                   Not Reported
00:10:22.459  Non-Operational Permissive Mode: Not Supported
00:10:22.459  
00:10:22.459  Health Information
00:10:22.459  ==================
00:10:22.459  Critical Warnings:
00:10:22.459    Available Spare Space:     OK
00:10:22.459    Temperature:               OK
00:10:22.459    Device Reliability:        OK
00:10:22.459    Read Only:                 No
00:10:22.459    Volatile Memory Backup:    OK
00:10:22.459  Current Temperature:         323 Kelvin (50 Celsius)
00:10:22.459  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:22.459  Available Spare:             0%
00:10:22.460  Available Spare Threshold:   0%
00:10:22.460  Life Percentage Used:        0%
00:10:22.460  Data Units Read:             1028
00:10:22.460  Data Units Written:          896
00:10:22.460  Host Read Commands:          50944
00:10:22.460  Host Write Commands:         49729
00:10:22.460  Controller Busy Time:        0 minutes
00:10:22.460  Power Cycles:                0
00:10:22.460  Power On Hours:              0 hours
00:10:22.460  Unsafe Shutdowns:            0
00:10:22.460  Unrecoverable Media Errors:  0
00:10:22.460  Lifetime Error Log Entries:  0
00:10:22.460  Warning Temperature Time:    0 minutes
00:10:22.460  Critical Temperature Time:   0 minutes
00:10:22.460  
00:10:22.460  Number of Queues
00:10:22.460  ================
00:10:22.460  Number of I/O Submission Queues:      64
00:10:22.460  Number of I/O Completion Queues:      64
00:10:22.460  
00:10:22.460  ZNS Specific Controller Data
00:10:22.460  ============================
00:10:22.460  Zone Append Size Limit:      0
00:10:22.460  
00:10:22.460  
00:10:22.460  Active Namespaces
00:10:22.460  =================
00:10:22.460  Namespace ID:1
00:10:22.460  Error Recovery Timeout:                Unlimited
00:10:22.460  Command Set Identifier:                NVM (00h)
00:10:22.460  Deallocate:                            Supported
00:10:22.460  Deallocated/Unwritten Error:           Supported
00:10:22.460  Deallocated Read Value:                All 0x00
00:10:22.460  Deallocate in Write Zeroes:            Not Supported
00:10:22.460  Deallocated Guard Field:               0xFFFF
00:10:22.460  Flush:                                 Supported
00:10:22.460  Reservation:                           Not Supported
00:10:22.460  Namespace Sharing Capabilities:        Private
00:10:22.460  Size (in LBAs):                        1310720 (5GiB)
00:10:22.460  Capacity (in LBAs):                    1310720 (5GiB)
00:10:22.460  Utilization (in LBAs):                 1310720 (5GiB)
00:10:22.460  Thin Provisioning:                     Not Supported
00:10:22.460  Per-NS Atomic Units:                   No
00:10:22.460  Maximum Single Source Range Length:    128
00:10:22.460  Maximum Copy Length:                   128
00:10:22.460  Maximum Source Range Count:            128
00:10:22.460  NGUID/EUI64 Never Reused:              No
00:10:22.460  Namespace Write Protected:             No
00:10:22.460  Number of LBA Formats:                 8
00:10:22.460  Current LBA Format:                    LBA Format #04
00:10:22.460  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:22.460  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:22.460  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:22.460  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:22.460  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:22.460  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:22.460  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:22.460  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:22.460  
00:10:22.460  NVM Specific Namespace Data
00:10:22.460  ===========================
00:10:22.460  Logical Block Storage Tag Mask:               0
00:10:22.460  Protection Information Capabilities:
00:10:22.460    16b Guard Protection Information Storage Tag Support:  No
00:10:22.460    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:22.460    Storage Tag Check Read Support:                        No
00:10:22.460  Extended LBA Format #00: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.460  Extended LBA Format #01: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.460  Extended LBA Format #02: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.460  Extended LBA Format #03: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.460  Extended LBA Format #04: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.460  Extended LBA Format #05: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.460  Extended LBA Format #06: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.460  Extended LBA Format #07: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.460   14:21:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:10:22.460   14:21:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0
00:10:22.719  =====================================================
00:10:22.719  NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:22.719  =====================================================
00:10:22.719  Controller Capabilities/Features
00:10:22.719  ================================
00:10:22.719  Vendor ID:                             1b36
00:10:22.719  Subsystem Vendor ID:                   1af4
00:10:22.719  Serial Number:                         12342
00:10:22.719  Model Number:                          QEMU NVMe Ctrl
00:10:22.719  Firmware Version:                      8.0.0
00:10:22.719  Recommended Arb Burst:                 6
00:10:22.719  IEEE OUI Identifier:                   00 54 52
00:10:22.719  Multi-path I/O
00:10:22.719    May have multiple subsystem ports:   No
00:10:22.719    May have multiple controllers:       No
00:10:22.719    Associated with SR-IOV VF:           No
00:10:22.719  Max Data Transfer Size:                524288
00:10:22.719  Max Number of Namespaces:              256
00:10:22.719  Max Number of I/O Queues:              64
00:10:22.719  NVMe Specification Version (VS):       1.4
00:10:22.719  NVMe Specification Version (Identify): 1.4
00:10:22.719  Maximum Queue Entries:                 2048
00:10:22.719  Contiguous Queues Required:            Yes
00:10:22.719  Arbitration Mechanisms Supported
00:10:22.719    Weighted Round Robin:                Not Supported
00:10:22.719    Vendor Specific:                     Not Supported
00:10:22.719  Reset Timeout:                         7500 ms
00:10:22.719  Doorbell Stride:                       4 bytes
00:10:22.719  NVM Subsystem Reset:                   Not Supported
00:10:22.719  Command Sets Supported
00:10:22.719    NVM Command Set:                     Supported
00:10:22.719  Boot Partition:                        Not Supported
00:10:22.719  Memory Page Size Minimum:              4096 bytes
00:10:22.719  Memory Page Size Maximum:              65536 bytes
00:10:22.719  Persistent Memory Region:              Not Supported
00:10:22.719  Optional Asynchronous Events Supported
00:10:22.719    Namespace Attribute Notices:         Supported
00:10:22.719    Firmware Activation Notices:         Not Supported
00:10:22.719    ANA Change Notices:                  Not Supported
00:10:22.719    PLE Aggregate Log Change Notices:    Not Supported
00:10:22.719    LBA Status Info Alert Notices:       Not Supported
00:10:22.719    EGE Aggregate Log Change Notices:    Not Supported
00:10:22.719    Normal NVM Subsystem Shutdown event: Not Supported
00:10:22.719    Zone Descriptor Change Notices:      Not Supported
00:10:22.719    Discovery Log Change Notices:        Not Supported
00:10:22.719  Controller Attributes
00:10:22.719    128-bit Host Identifier:             Not Supported
00:10:22.719    Non-Operational Permissive Mode:     Not Supported
00:10:22.719    NVM Sets:                            Not Supported
00:10:22.719    Read Recovery Levels:                Not Supported
00:10:22.719    Endurance Groups:                    Not Supported
00:10:22.719    Predictable Latency Mode:            Not Supported
00:10:22.719    Traffic Based Keep Alive:            Not Supported
00:10:22.719    Namespace Granularity:               Not Supported
00:10:22.719    SQ Associations:                     Not Supported
00:10:22.719    UUID List:                           Not Supported
00:10:22.719    Multi-Domain Subsystem:              Not Supported
00:10:22.719    Fixed Capacity Management:           Not Supported
00:10:22.719    Variable Capacity Management:        Not Supported
00:10:22.719    Delete Endurance Group:              Not Supported
00:10:22.719    Delete NVM Set:                      Not Supported
00:10:22.719    Extended LBA Formats Supported:      Supported
00:10:22.719    Flexible Data Placement Supported:   Not Supported
00:10:22.719  
00:10:22.719  Controller Memory Buffer Support
00:10:22.719  ================================
00:10:22.719  Supported:                             No
00:10:22.719  
00:10:22.719  Persistent Memory Region Support
00:10:22.719  ================================
00:10:22.719  Supported:                             No
00:10:22.719  
00:10:22.719  Admin Command Set Attributes
00:10:22.719  ============================
00:10:22.719  Security Send/Receive:                 Not Supported
00:10:22.719  Format NVM:                            Supported
00:10:22.719  Firmware Activate/Download:            Not Supported
00:10:22.719  Namespace Management:                  Supported
00:10:22.719  Device Self-Test:                      Not Supported
00:10:22.719  Directives:                            Supported
00:10:22.719  NVMe-MI:                               Not Supported
00:10:22.719  Virtualization Management:             Not Supported
00:10:22.719  Doorbell Buffer Config:                Supported
00:10:22.719  Get LBA Status Capability:             Not Supported
00:10:22.719  Command & Feature Lockdown Capability: Not Supported
00:10:22.719  Abort Command Limit:                   4
00:10:22.719  Async Event Request Limit:             4
00:10:22.719  Number of Firmware Slots:              N/A
00:10:22.719  Firmware Slot 1 Read-Only:             N/A
00:10:22.719  Firmware Activation Without Reset:     N/A
00:10:22.719  Multiple Update Detection Support:     N/A
00:10:22.719  Firmware Update Granularity:           No Information Provided
00:10:22.719  Per-Namespace SMART Log:               Yes
00:10:22.719  Asymmetric Namespace Access Log Page:  Not Supported
00:10:22.719  Subsystem NQN:                         nqn.2019-08.org.qemu:12342
00:10:22.719  Command Effects Log Page:              Supported
00:10:22.719  Get Log Page Extended Data:            Supported
00:10:22.719  Telemetry Log Pages:                   Not Supported
00:10:22.719  Persistent Event Log Pages:            Not Supported
00:10:22.719  Supported Log Pages Log Page:          May Support
00:10:22.719  Commands Supported & Effects Log Page: Not Supported
00:10:22.719  Feature Identifiers & Effects Log Page: May Support
00:10:22.719  NVMe-MI Commands & Effects Log Page:   May Support
00:10:22.719  Data Area 4 for Telemetry Log:         Not Supported
00:10:22.719  Error Log Page Entries Supported:      1
00:10:22.719  Keep Alive:                            Not Supported
00:10:22.719  
00:10:22.719  NVM Command Set Attributes
00:10:22.719  ==========================
00:10:22.719  Submission Queue Entry Size
00:10:22.719    Max:                       64
00:10:22.719    Min:                       64
00:10:22.719  Completion Queue Entry Size
00:10:22.719    Max:                       16
00:10:22.719    Min:                       16
00:10:22.719  Number of Namespaces:        256
00:10:22.719  Compare Command:             Supported
00:10:22.719  Write Uncorrectable Command: Not Supported
00:10:22.719  Dataset Management Command:  Supported
00:10:22.719  Write Zeroes Command:        Supported
00:10:22.719  Set Features Save Field:     Supported
00:10:22.719  Reservations:                Not Supported
00:10:22.719  Timestamp:                   Supported
00:10:22.719  Copy:                        Supported
00:10:22.719  Volatile Write Cache:        Present
00:10:22.719  Atomic Write Unit (Normal):  1
00:10:22.719  Atomic Write Unit (PFail):   1
00:10:22.719  Atomic Compare & Write Unit: 1
00:10:22.719  Fused Compare & Write:       Not Supported
00:10:22.719  Scatter-Gather List
00:10:22.719    SGL Command Set:           Supported
00:10:22.719    SGL Keyed:                 Not Supported
00:10:22.719    SGL Bit Bucket Descriptor: Not Supported
00:10:22.719    SGL Metadata Pointer:      Not Supported
00:10:22.719    Oversized SGL:             Not Supported
00:10:22.719    SGL Metadata Address:      Not Supported
00:10:22.719    SGL Offset:                Not Supported
00:10:22.719    Transport SGL Data Block:  Not Supported
00:10:22.719  Replay Protected Memory Block:  Not Supported
00:10:22.719  
00:10:22.719  Firmware Slot Information
00:10:22.719  =========================
00:10:22.719  Active slot:                 1
00:10:22.719  Slot 1 Firmware Revision:    1.0
00:10:22.719  
00:10:22.719  
00:10:22.719  Commands Supported and Effects
00:10:22.719  ==============================
00:10:22.719  Admin Commands
00:10:22.719  --------------
00:10:22.719     Delete I/O Submission Queue (00h): Supported 
00:10:22.719     Create I/O Submission Queue (01h): Supported 
00:10:22.719                    Get Log Page (02h): Supported 
00:10:22.719     Delete I/O Completion Queue (04h): Supported 
00:10:22.719     Create I/O Completion Queue (05h): Supported 
00:10:22.719                        Identify (06h): Supported 
00:10:22.719                           Abort (08h): Supported 
00:10:22.719                    Set Features (09h): Supported 
00:10:22.719                    Get Features (0Ah): Supported 
00:10:22.719      Asynchronous Event Request (0Ch): Supported 
00:10:22.719            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:22.719                  Directive Send (19h): Supported 
00:10:22.719               Directive Receive (1Ah): Supported 
00:10:22.719       Virtualization Management (1Ch): Supported 
00:10:22.719          Doorbell Buffer Config (7Ch): Supported 
00:10:22.720                      Format NVM (80h): Supported LBA-Change 
00:10:22.720  I/O Commands
00:10:22.720  ------------
00:10:22.720                           Flush (00h): Supported LBA-Change 
00:10:22.720                           Write (01h): Supported LBA-Change 
00:10:22.720                            Read (02h): Supported 
00:10:22.720                         Compare (05h): Supported 
00:10:22.720                    Write Zeroes (08h): Supported LBA-Change 
00:10:22.720              Dataset Management (09h): Supported LBA-Change 
00:10:22.720                         Unknown (0Ch): Supported 
00:10:22.720                         Unknown (12h): Supported 
00:10:22.720                            Copy (19h): Supported LBA-Change 
00:10:22.720                         Unknown (1Dh): Supported LBA-Change 
00:10:22.720  
00:10:22.720  Error Log
00:10:22.720  =========
00:10:22.720  
00:10:22.720  Arbitration
00:10:22.720  ===========
00:10:22.720  Arbitration Burst:           no limit
00:10:22.720  
00:10:22.720  Power Management
00:10:22.720  ================
00:10:22.720  Number of Power States:          1
00:10:22.720  Current Power State:             Power State #0
00:10:22.720  Power State #0:
00:10:22.720    Max Power:                     25.00 W
00:10:22.720    Non-Operational State:         Operational
00:10:22.720    Entry Latency:                 16 microseconds
00:10:22.720    Exit Latency:                  4 microseconds
00:10:22.720    Relative Read Throughput:      0
00:10:22.720    Relative Read Latency:         0
00:10:22.720    Relative Write Throughput:     0
00:10:22.720    Relative Write Latency:        0
00:10:22.720    Idle Power:                     Not Reported
00:10:22.720    Active Power:                   Not Reported
00:10:22.720  Non-Operational Permissive Mode: Not Supported
00:10:22.720  
00:10:22.720  Health Information
00:10:22.720  ==================
00:10:22.720  Critical Warnings:
00:10:22.720    Available Spare Space:     OK
00:10:22.720    Temperature:               OK
00:10:22.720    Device Reliability:        OK
00:10:22.720    Read Only:                 No
00:10:22.720    Volatile Memory Backup:    OK
00:10:22.720  Current Temperature:         323 Kelvin (50 Celsius)
00:10:22.720  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:22.720  Available Spare:             0%
00:10:22.720  Available Spare Threshold:   0%
00:10:22.720  Life Percentage Used:        0%
00:10:22.720  Data Units Read:             2123
00:10:22.720  Data Units Written:          1910
00:10:22.720  Host Read Commands:          104247
00:10:22.720  Host Write Commands:         102516
00:10:22.720  Controller Busy Time:        0 minutes
00:10:22.720  Power Cycles:                0
00:10:22.720  Power On Hours:              0 hours
00:10:22.720  Unsafe Shutdowns:            0
00:10:22.720  Unrecoverable Media Errors:  0
00:10:22.720  Lifetime Error Log Entries:  0
00:10:22.720  Warning Temperature Time:    0 minutes
00:10:22.720  Critical Temperature Time:   0 minutes
00:10:22.720  
00:10:22.720  Number of Queues
00:10:22.720  ================
00:10:22.720  Number of I/O Submission Queues:      64
00:10:22.720  Number of I/O Completion Queues:      64
00:10:22.720  
00:10:22.720  ZNS Specific Controller Data
00:10:22.720  ============================
00:10:22.720  Zone Append Size Limit:      0
00:10:22.720  
00:10:22.720  
00:10:22.720  Active Namespaces
00:10:22.720  =================
00:10:22.720  Namespace ID:1
00:10:22.720  Error Recovery Timeout:                Unlimited
00:10:22.720  Command Set Identifier:                NVM (00h)
00:10:22.720  Deallocate:                            Supported
00:10:22.720  Deallocated/Unwritten Error:           Supported
00:10:22.720  Deallocated Read Value:                All 0x00
00:10:22.720  Deallocate in Write Zeroes:            Not Supported
00:10:22.720  Deallocated Guard Field:               0xFFFF
00:10:22.720  Flush:                                 Supported
00:10:22.720  Reservation:                           Not Supported
00:10:22.720  Namespace Sharing Capabilities:        Private
00:10:22.720  Size (in LBAs):                        1048576 (4GiB)
00:10:22.720  Capacity (in LBAs):                    1048576 (4GiB)
00:10:22.720  Utilization (in LBAs):                 1048576 (4GiB)
00:10:22.720  Thin Provisioning:                     Not Supported
00:10:22.720  Per-NS Atomic Units:                   No
00:10:22.720  Maximum Single Source Range Length:    128
00:10:22.720  Maximum Copy Length:                   128
00:10:22.720  Maximum Source Range Count:            128
00:10:22.720  NGUID/EUI64 Never Reused:              No
00:10:22.720  Namespace Write Protected:             No
00:10:22.720  Number of LBA Formats:                 8
00:10:22.720  Current LBA Format:                    LBA Format #04
00:10:22.720  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:22.720  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:22.720  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:22.720  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:22.720  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:22.720  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:22.720  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:22.720  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:22.720  
00:10:22.720  NVM Specific Namespace Data
00:10:22.720  ===========================
00:10:22.720  Logical Block Storage Tag Mask:               0
00:10:22.720  Protection Information Capabilities:
00:10:22.720    16b Guard Protection Information Storage Tag Support:  No
00:10:22.720    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:22.720    Storage Tag Check Read Support:                        No
00:10:22.720  Extended LBA Format #00: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #01: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #02: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #03: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #04: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #05: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #06: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #07: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Namespace ID:2
00:10:22.720  Error Recovery Timeout:                Unlimited
00:10:22.720  Command Set Identifier:                NVM (00h)
00:10:22.720  Deallocate:                            Supported
00:10:22.720  Deallocated/Unwritten Error:           Supported
00:10:22.720  Deallocated Read Value:                All 0x00
00:10:22.720  Deallocate in Write Zeroes:            Not Supported
00:10:22.720  Deallocated Guard Field:               0xFFFF
00:10:22.720  Flush:                                 Supported
00:10:22.720  Reservation:                           Not Supported
00:10:22.720  Namespace Sharing Capabilities:        Private
00:10:22.720  Size (in LBAs):                        1048576 (4GiB)
00:10:22.720  Capacity (in LBAs):                    1048576 (4GiB)
00:10:22.720  Utilization (in LBAs):                 1048576 (4GiB)
00:10:22.720  Thin Provisioning:                     Not Supported
00:10:22.720  Per-NS Atomic Units:                   No
00:10:22.720  Maximum Single Source Range Length:    128
00:10:22.720  Maximum Copy Length:                   128
00:10:22.720  Maximum Source Range Count:            128
00:10:22.720  NGUID/EUI64 Never Reused:              No
00:10:22.720  Namespace Write Protected:             No
00:10:22.720  Number of LBA Formats:                 8
00:10:22.720  Current LBA Format:                    LBA Format #04
00:10:22.720  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:22.720  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:22.720  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:22.720  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:22.720  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:22.720  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:22.720  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:22.720  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:22.720  
00:10:22.720  NVM Specific Namespace Data
00:10:22.720  ===========================
00:10:22.720  Logical Block Storage Tag Mask:               0
00:10:22.720  Protection Information Capabilities:
00:10:22.720    16b Guard Protection Information Storage Tag Support:  No
00:10:22.720    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:22.720    Storage Tag Check Read Support:                        No
00:10:22.720  Extended LBA Format #00: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #01: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #02: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #03: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #04: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #05: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #06: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Extended LBA Format #07: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.720  Namespace ID:3
00:10:22.720  Error Recovery Timeout:                Unlimited
00:10:22.720  Command Set Identifier:                NVM (00h)
00:10:22.720  Deallocate:                            Supported
00:10:22.720  Deallocated/Unwritten Error:           Supported
00:10:22.720  Deallocated Read Value:                All 0x00
00:10:22.720  Deallocate in Write Zeroes:            Not Supported
00:10:22.720  Deallocated Guard Field:               0xFFFF
00:10:22.720  Flush:                                 Supported
00:10:22.720  Reservation:                           Not Supported
00:10:22.720  Namespace Sharing Capabilities:        Private
00:10:22.720  Size (in LBAs):                        1048576 (4GiB)
00:10:22.720  Capacity (in LBAs):                    1048576 (4GiB)
00:10:22.720  Utilization (in LBAs):                 1048576 (4GiB)
00:10:22.720  Thin Provisioning:                     Not Supported
00:10:22.720  Per-NS Atomic Units:                   No
00:10:22.720  Maximum Single Source Range Length:    128
00:10:22.720  Maximum Copy Length:                   128
00:10:22.720  Maximum Source Range Count:            128
00:10:22.720  NGUID/EUI64 Never Reused:              No
00:10:22.720  Namespace Write Protected:             No
00:10:22.720  Number of LBA Formats:                 8
00:10:22.720  Current LBA Format:                    LBA Format #04
00:10:22.720  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:22.721  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:22.721  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:22.721  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:22.721  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:22.721  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:22.721  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:22.721  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:22.721  
00:10:22.721  NVM Specific Namespace Data
00:10:22.721  ===========================
00:10:22.721  Logical Block Storage Tag Mask:               0
00:10:22.721  Protection Information Capabilities:
00:10:22.721    16b Guard Protection Information Storage Tag Support:  No
00:10:22.721    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:22.721    Storage Tag Check Read Support:                        No
00:10:22.721  Extended LBA Format #00: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.721  Extended LBA Format #01: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.721  Extended LBA Format #02: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.721  Extended LBA Format #03: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.721  Extended LBA Format #04: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.721  Extended LBA Format #05: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.721  Extended LBA Format #06: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.721  Extended LBA Format #07: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.721   14:21:01 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:10:22.721   14:21:01 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
00:10:22.979  =====================================================
00:10:22.980  NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:22.980  =====================================================
00:10:22.980  Controller Capabilities/Features
00:10:22.980  ================================
00:10:22.980  Vendor ID:                             1b36
00:10:22.980  Subsystem Vendor ID:                   1af4
00:10:22.980  Serial Number:                         12343
00:10:22.980  Model Number:                          QEMU NVMe Ctrl
00:10:22.980  Firmware Version:                      8.0.0
00:10:22.980  Recommended Arb Burst:                 6
00:10:22.980  IEEE OUI Identifier:                   00 54 52
00:10:22.980  Multi-path I/O
00:10:22.980    May have multiple subsystem ports:   No
00:10:22.980    May have multiple controllers:       Yes
00:10:22.980    Associated with SR-IOV VF:           No
00:10:22.980  Max Data Transfer Size:                524288
00:10:22.980  Max Number of Namespaces:              256
00:10:22.980  Max Number of I/O Queues:              64
00:10:22.980  NVMe Specification Version (VS):       1.4
00:10:22.980  NVMe Specification Version (Identify): 1.4
00:10:22.980  Maximum Queue Entries:                 2048
00:10:22.980  Contiguous Queues Required:            Yes
00:10:22.980  Arbitration Mechanisms Supported
00:10:22.980    Weighted Round Robin:                Not Supported
00:10:22.980    Vendor Specific:                     Not Supported
00:10:22.980  Reset Timeout:                         7500 ms
00:10:22.980  Doorbell Stride:                       4 bytes
00:10:22.980  NVM Subsystem Reset:                   Not Supported
00:10:22.980  Command Sets Supported
00:10:22.980    NVM Command Set:                     Supported
00:10:22.980  Boot Partition:                        Not Supported
00:10:22.980  Memory Page Size Minimum:              4096 bytes
00:10:22.980  Memory Page Size Maximum:              65536 bytes
00:10:22.980  Persistent Memory Region:              Not Supported
00:10:22.980  Optional Asynchronous Events Supported
00:10:22.980    Namespace Attribute Notices:         Supported
00:10:22.980    Firmware Activation Notices:         Not Supported
00:10:22.980    ANA Change Notices:                  Not Supported
00:10:22.980    PLE Aggregate Log Change Notices:    Not Supported
00:10:22.980    LBA Status Info Alert Notices:       Not Supported
00:10:22.980    EGE Aggregate Log Change Notices:    Not Supported
00:10:22.980    Normal NVM Subsystem Shutdown event: Not Supported
00:10:22.980    Zone Descriptor Change Notices:      Not Supported
00:10:22.980    Discovery Log Change Notices:        Not Supported
00:10:22.980  Controller Attributes
00:10:22.980    128-bit Host Identifier:             Not Supported
00:10:22.980    Non-Operational Permissive Mode:     Not Supported
00:10:22.980    NVM Sets:                            Not Supported
00:10:22.980    Read Recovery Levels:                Not Supported
00:10:22.980    Endurance Groups:                    Supported
00:10:22.980    Predictable Latency Mode:            Not Supported
00:10:22.980    Traffic Based Keep Alive:            Not Supported
00:10:22.980    Namespace Granularity:               Not Supported
00:10:22.980    SQ Associations:                     Not Supported
00:10:22.980    UUID List:                           Not Supported
00:10:22.980    Multi-Domain Subsystem:              Not Supported
00:10:22.980    Fixed Capacity Management:           Not Supported
00:10:22.980    Variable Capacity Management:        Not Supported
00:10:22.980    Delete Endurance Group:              Not Supported
00:10:22.980    Delete NVM Set:                      Not Supported
00:10:22.980    Extended LBA Formats Supported:      Supported
00:10:22.980    Flexible Data Placement Supported:   Supported
00:10:22.980  
00:10:22.980  Controller Memory Buffer Support
00:10:22.980  ================================
00:10:22.980  Supported:                             No
00:10:22.980  
00:10:22.980  Persistent Memory Region Support
00:10:22.980  ================================
00:10:22.980  Supported:                             No
00:10:22.980  
00:10:22.980  Admin Command Set Attributes
00:10:22.980  ============================
00:10:22.980  Security Send/Receive:                 Not Supported
00:10:22.980  Format NVM:                            Supported
00:10:22.980  Firmware Activate/Download:            Not Supported
00:10:22.980  Namespace Management:                  Supported
00:10:22.980  Device Self-Test:                      Not Supported
00:10:22.980  Directives:                            Supported
00:10:22.980  NVMe-MI:                               Not Supported
00:10:22.980  Virtualization Management:             Not Supported
00:10:22.980  Doorbell Buffer Config:                Supported
00:10:22.980  Get LBA Status Capability:             Not Supported
00:10:22.980  Command & Feature Lockdown Capability: Not Supported
00:10:22.980  Abort Command Limit:                   4
00:10:22.980  Async Event Request Limit:             4
00:10:22.980  Number of Firmware Slots:              N/A
00:10:22.980  Firmware Slot 1 Read-Only:             N/A
00:10:22.980  Firmware Activation Without Reset:     N/A
00:10:22.980  Multiple Update Detection Support:     N/A
00:10:22.980  Firmware Update Granularity:           No Information Provided
00:10:22.980  Per-Namespace SMART Log:               Yes
00:10:22.980  Asymmetric Namespace Access Log Page:  Not Supported
00:10:22.980  Subsystem NQN:                         nqn.2019-08.org.qemu:fdp-subsys3
00:10:22.980  Command Effects Log Page:              Supported
00:10:22.980  Get Log Page Extended Data:            Supported
00:10:22.980  Telemetry Log Pages:                   Not Supported
00:10:22.980  Persistent Event Log Pages:            Not Supported
00:10:22.980  Supported Log Pages Log Page:          May Support
00:10:22.980  Commands Supported & Effects Log Page: Not Supported
00:10:22.980  Feature Identifiers & Effects Log Page: May Support
00:10:22.980  NVMe-MI Commands & Effects Log Page:   May Support
00:10:22.980  Data Area 4 for Telemetry Log:         Not Supported
00:10:22.980  Error Log Page Entries Supported:      1
00:10:22.980  Keep Alive:                            Not Supported
00:10:22.980  
00:10:22.980  NVM Command Set Attributes
00:10:22.980  ==========================
00:10:22.980  Submission Queue Entry Size
00:10:22.980    Max:                       64
00:10:22.980    Min:                       64
00:10:22.980  Completion Queue Entry Size
00:10:22.980    Max:                       16
00:10:22.980    Min:                       16
00:10:22.980  Number of Namespaces:        256
00:10:22.980  Compare Command:             Supported
00:10:22.980  Write Uncorrectable Command: Not Supported
00:10:22.980  Dataset Management Command:  Supported
00:10:22.980  Write Zeroes Command:        Supported
00:10:22.980  Set Features Save Field:     Supported
00:10:22.980  Reservations:                Not Supported
00:10:22.980  Timestamp:                   Supported
00:10:22.980  Copy:                        Supported
00:10:22.980  Volatile Write Cache:        Present
00:10:22.980  Atomic Write Unit (Normal):  1
00:10:22.980  Atomic Write Unit (PFail):   1
00:10:22.980  Atomic Compare & Write Unit: 1
00:10:22.980  Fused Compare & Write:       Not Supported
00:10:22.980  Scatter-Gather List
00:10:22.980    SGL Command Set:           Supported
00:10:22.980    SGL Keyed:                 Not Supported
00:10:22.980    SGL Bit Bucket Descriptor: Not Supported
00:10:22.980    SGL Metadata Pointer:      Not Supported
00:10:22.980    Oversized SGL:             Not Supported
00:10:22.980    SGL Metadata Address:      Not Supported
00:10:22.980    SGL Offset:                Not Supported
00:10:22.980    Transport SGL Data Block:  Not Supported
00:10:22.980  Replay Protected Memory Block:  Not Supported
00:10:22.980  
00:10:22.980  Firmware Slot Information
00:10:22.980  =========================
00:10:22.980  Active slot:                 1
00:10:22.980  Slot 1 Firmware Revision:    1.0
00:10:22.980  
00:10:22.980  
00:10:22.980  Commands Supported and Effects
00:10:22.980  ==============================
00:10:22.980  Admin Commands
00:10:22.980  --------------
00:10:22.980     Delete I/O Submission Queue (00h): Supported 
00:10:22.980     Create I/O Submission Queue (01h): Supported 
00:10:22.980                    Get Log Page (02h): Supported 
00:10:22.980     Delete I/O Completion Queue (04h): Supported 
00:10:22.980     Create I/O Completion Queue (05h): Supported 
00:10:22.980                        Identify (06h): Supported 
00:10:22.980                           Abort (08h): Supported 
00:10:22.980                    Set Features (09h): Supported 
00:10:22.980                    Get Features (0Ah): Supported 
00:10:22.980      Asynchronous Event Request (0Ch): Supported 
00:10:22.980            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:10:22.980                  Directive Send (19h): Supported 
00:10:22.980               Directive Receive (1Ah): Supported 
00:10:22.980       Virtualization Management (1Ch): Supported 
00:10:22.980          Doorbell Buffer Config (7Ch): Supported 
00:10:22.980                      Format NVM (80h): Supported LBA-Change 
00:10:22.980  I/O Commands
00:10:22.980  ------------
00:10:22.980                           Flush (00h): Supported LBA-Change 
00:10:22.980                           Write (01h): Supported LBA-Change 
00:10:22.980                            Read (02h): Supported 
00:10:22.980                         Compare (05h): Supported 
00:10:22.980                    Write Zeroes (08h): Supported LBA-Change 
00:10:22.980              Dataset Management (09h): Supported LBA-Change 
00:10:22.980                         Unknown (0Ch): Supported 
00:10:22.980                         Unknown (12h): Supported 
00:10:22.980                            Copy (19h): Supported LBA-Change 
00:10:22.980                         Unknown (1Dh): Supported LBA-Change 
00:10:22.980  
00:10:22.980  Error Log
00:10:22.980  =========
00:10:22.980  
00:10:22.980  Arbitration
00:10:22.980  ===========
00:10:22.980  Arbitration Burst:           no limit
00:10:22.980  
00:10:22.980  Power Management
00:10:22.980  ================
00:10:22.980  Number of Power States:          1
00:10:22.980  Current Power State:             Power State #0
00:10:22.980  Power State #0:
00:10:22.980    Max Power:                     25.00 W
00:10:22.980    Non-Operational State:         Operational
00:10:22.980    Entry Latency:                 16 microseconds
00:10:22.980    Exit Latency:                  4 microseconds
00:10:22.980    Relative Read Throughput:      0
00:10:22.980    Relative Read Latency:         0
00:10:22.980    Relative Write Throughput:     0
00:10:22.980    Relative Write Latency:        0
00:10:22.980    Idle Power:                     Not Reported
00:10:22.981    Active Power:                   Not Reported
00:10:22.981  Non-Operational Permissive Mode: Not Supported
00:10:22.981  
00:10:22.981  Health Information
00:10:22.981  ==================
00:10:22.981  Critical Warnings:
00:10:22.981    Available Spare Space:     OK
00:10:22.981    Temperature:               OK
00:10:22.981    Device Reliability:        OK
00:10:22.981    Read Only:                 No
00:10:22.981    Volatile Memory Backup:    OK
00:10:22.981  Current Temperature:         323 Kelvin (50 Celsius)
00:10:22.981  Temperature Threshold:       343 Kelvin (70 Celsius)
00:10:22.981  Available Spare:             0%
00:10:22.981  Available Spare Threshold:   0%
00:10:22.981  Life Percentage Used:        0%
00:10:22.981  Data Units Read:             750
00:10:22.981  Data Units Written:          679
00:10:22.981  Host Read Commands:          35099
00:10:22.981  Host Write Commands:         34522
00:10:22.981  Controller Busy Time:        0 minutes
00:10:22.981  Power Cycles:                0
00:10:22.981  Power On Hours:              0 hours
00:10:22.981  Unsafe Shutdowns:            0
00:10:22.981  Unrecoverable Media Errors:  0
00:10:22.981  Lifetime Error Log Entries:  0
00:10:22.981  Warning Temperature Time:    0 minutes
00:10:22.981  Critical Temperature Time:   0 minutes
00:10:22.981  
00:10:22.981  Number of Queues
00:10:22.981  ================
00:10:22.981  Number of I/O Submission Queues:      64
00:10:22.981  Number of I/O Completion Queues:      64
00:10:22.981  
00:10:22.981  ZNS Specific Controller Data
00:10:22.981  ============================
00:10:22.981  Zone Append Size Limit:      0
00:10:22.981  
00:10:22.981  
00:10:22.981  Active Namespaces
00:10:22.981  =================
00:10:22.981  Namespace ID:1
00:10:22.981  Error Recovery Timeout:                Unlimited
00:10:22.981  Command Set Identifier:                NVM (00h)
00:10:22.981  Deallocate:                            Supported
00:10:22.981  Deallocated/Unwritten Error:           Supported
00:10:22.981  Deallocated Read Value:                All 0x00
00:10:22.981  Deallocate in Write Zeroes:            Not Supported
00:10:22.981  Deallocated Guard Field:               0xFFFF
00:10:22.981  Flush:                                 Supported
00:10:22.981  Reservation:                           Not Supported
00:10:22.981  Namespace Sharing Capabilities:        Multiple Controllers
00:10:22.981  Size (in LBAs):                        262144 (1GiB)
00:10:22.981  Capacity (in LBAs):                    262144 (1GiB)
00:10:22.981  Utilization (in LBAs):                 262144 (1GiB)
00:10:22.981  Thin Provisioning:                     Not Supported
00:10:22.981  Per-NS Atomic Units:                   No
00:10:22.981  Maximum Single Source Range Length:    128
00:10:22.981  Maximum Copy Length:                   128
00:10:22.981  Maximum Source Range Count:            128
00:10:22.981  NGUID/EUI64 Never Reused:              No
00:10:22.981  Namespace Write Protected:             No
00:10:22.981  Endurance group ID:                    1
00:10:22.981  Number of LBA Formats:                 8
00:10:22.981  Current LBA Format:                    LBA Format #04
00:10:22.981  LBA Format #00: Data Size:   512  Metadata Size:     0
00:10:22.981  LBA Format #01: Data Size:   512  Metadata Size:     8
00:10:22.981  LBA Format #02: Data Size:   512  Metadata Size:    16
00:10:22.981  LBA Format #03: Data Size:   512  Metadata Size:    64
00:10:22.981  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:10:22.981  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:10:22.981  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:10:22.981  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:10:22.981  
00:10:22.981  Get Feature FDP:
00:10:22.981  ================
00:10:22.981    Enabled:                 Yes
00:10:22.981    FDP configuration index: 0
00:10:22.981  
00:10:22.981  FDP configurations log page
00:10:22.981  ===========================
00:10:22.981  Number of FDP configurations:         1
00:10:22.981  Version:                              0
00:10:22.981  Size:                                 112
00:10:22.981  FDP Configuration Descriptor:         0
00:10:22.981    Descriptor Size:                    96
00:10:22.981    Reclaim Group Identifier format:    2
00:10:22.981    FDP Volatile Write Cache:           Not Present
00:10:22.981    FDP Configuration:                  Valid
00:10:22.981    Vendor Specific Size:               0
00:10:22.981    Number of Reclaim Groups:           2
00:10:22.981    Number of Reclaim Unit Handles:     8
00:10:22.981    Max Placement Identifiers:          128
00:10:22.981    Number of Namespaces Supported:     256
00:10:22.981    Reclaim Unit Nominal Size:          6000000 bytes
00:10:22.981    Estimated Reclaim Unit Time Limit:  Not Reported
00:10:22.981      RUH Desc #000:          RUH Type: Initially Isolated
00:10:22.981      RUH Desc #001:          RUH Type: Initially Isolated
00:10:22.981      RUH Desc #002:          RUH Type: Initially Isolated
00:10:22.981      RUH Desc #003:          RUH Type: Initially Isolated
00:10:22.981      RUH Desc #004:          RUH Type: Initially Isolated
00:10:22.981      RUH Desc #005:          RUH Type: Initially Isolated
00:10:22.981      RUH Desc #006:          RUH Type: Initially Isolated
00:10:22.981      RUH Desc #007:          RUH Type: Initially Isolated
00:10:22.981  
00:10:22.981  FDP reclaim unit handle usage log page
00:10:22.981  ======================================
00:10:22.981  Number of Reclaim Unit Handles:       8
00:10:22.981    RUH Usage Desc #000:   RUH Attributes: Controller Specified
00:10:22.981    RUH Usage Desc #001:   RUH Attributes: Unused
00:10:22.981    RUH Usage Desc #002:   RUH Attributes: Unused
00:10:22.981    RUH Usage Desc #003:   RUH Attributes: Unused
00:10:22.981    RUH Usage Desc #004:   RUH Attributes: Unused
00:10:22.981    RUH Usage Desc #005:   RUH Attributes: Unused
00:10:22.981    RUH Usage Desc #006:   RUH Attributes: Unused
00:10:22.981    RUH Usage Desc #007:   RUH Attributes: Unused
00:10:22.981  
00:10:22.981  FDP statistics log page
00:10:22.981  =======================
00:10:22.981  Host bytes with metadata written:  414687232
00:10:22.981  Media bytes with metadata written: 414732288
00:10:22.981  Media bytes erased:                0
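One common way to read these counters: the media-to-host ratio of bytes written approximates the write amplification accumulated so far in the run (a sketch, assuming a shell with bc available; here it is negligible):

    echo "scale=4; 414732288 / 414687232" | bc   # -> 1.0001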
00:10:22.981  
00:10:22.981  FDP events log page
00:10:22.981  ===================
00:10:22.981  Number of FDP events:              0
00:10:22.981  
00:10:22.981  NVM Specific Namespace Data
00:10:22.981  ===========================
00:10:22.981  Logical Block Storage Tag Mask:               0
00:10:22.981  Protection Information Capabilities:
00:10:22.981    16b Guard Protection Information Storage Tag Support:  No
00:10:22.981    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:10:22.981    Storage Tag Check Read Support:                        No
00:10:22.981  Extended LBA Format #00: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.981  Extended LBA Format #01: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.981  Extended LBA Format #02: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.981  Extended LBA Format #03: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.981  Extended LBA Format #04: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.981  Extended LBA Format #05: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.981  Extended LBA Format #06: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.981  Extended LBA Format #07: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:10:22.981  
00:10:22.981  real	0m1.790s
00:10:22.981  user	0m0.712s
00:10:22.981  sys	0m0.854s
00:10:22.981   14:21:01 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:22.981   14:21:01 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x
00:10:22.981  ************************************
00:10:22.981  END TEST nvme_identify
00:10:22.981  ************************************
00:10:22.981   14:21:01 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:10:22.981   14:21:01 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:22.981   14:21:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:22.981   14:21:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:22.981  ************************************
00:10:22.981  START TEST nvme_perf
00:10:22.981  ************************************
00:10:22.981   14:21:01 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf
00:10:22.981   14:21:01 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
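For readers of the perf output that follows, a key to the flags in the invocation above, per spdk_nvme_perf's usage text (a reading aid rather than an authoritative reference for this exact SPDK revision; -N is kept as traced without interpretation):

    # -q 128    queue depth: 128 outstanding I/Os
    # -w read   100% read workload
    # -o 12288  I/O size in bytes (12 KiB)
    # -t 1      run time in seconds
    # -LL       latency tracking; the doubled flag yields the detailed
    #           percentile summaries printed below
    # -i 0      shared memory group ID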
00:10:24.887  Initializing NVMe Controllers
00:10:24.887  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:24.887  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:24.887  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:24.887  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:24.887  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:10:24.887  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:10:24.887  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:10:24.887  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:10:24.887  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:10:24.887  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:10:24.887  Initialization complete. Launching workers.
00:10:24.887  ========================================================
00:10:24.887                                                                             Latency(us)
00:10:24.887  Device Information                     :       IOPS      MiB/s    Average        min        max
00:10:24.887  PCIE (0000:00:10.0) NSID 1 from core  0:   10668.33     125.02   11999.60    7229.60   52197.50
00:10:24.887  PCIE (0000:00:11.0) NSID 1 from core  0:   10668.33     125.02   11941.94    7327.94   47284.68
00:10:24.887  PCIE (0000:00:13.0) NSID 1 from core  0:   10668.33     125.02   11882.90    7319.50   42392.77
00:10:24.887  PCIE (0000:00:12.0) NSID 1 from core  0:   10668.33     125.02   11823.37    7281.17   37144.24
00:10:24.887  PCIE (0000:00:12.0) NSID 2 from core  0:   10668.33     125.02   11762.98    7282.21   31931.71
00:10:24.887  PCIE (0000:00:12.0) NSID 3 from core  0:   10668.33     125.02   11703.17    7281.59   26698.28
00:10:24.887  ========================================================
00:10:24.887  Total                                  :   64009.97     750.12   11852.33    7229.60   52197.50
00:10:24.887  
00:10:24.887  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:10:24.887  =================================================================================
00:10:24.887    1.00000% :  7685.585us
00:10:24.887   10.00000% :  8638.836us
00:10:24.887   25.00000% :  9353.775us
00:10:24.887   50.00000% : 10307.025us
00:10:24.887   75.00000% : 12392.262us
00:10:24.887   90.00000% : 19541.644us
00:10:24.887   95.00000% : 22043.927us
00:10:24.887   98.00000% : 24069.585us
00:10:24.887   99.00000% : 39321.600us
00:10:24.887   99.50000% : 48854.109us
00:10:24.887   99.90000% : 51713.862us
00:10:24.887   99.99000% : 52190.487us
00:10:24.887   99.99900% : 52428.800us
00:10:24.887   99.99990% : 52428.800us
00:10:24.887   99.99999% : 52428.800us
00:10:24.887  
00:10:24.887  Summary latency data for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:10:24.887  =================================================================================
00:10:24.887    1.00000% :  7745.164us
00:10:24.887   10.00000% :  8638.836us
00:10:24.887   25.00000% :  9353.775us
00:10:24.887   50.00000% : 10307.025us
00:10:24.887   75.00000% : 12273.105us
00:10:24.887   90.00000% : 19660.800us
00:10:24.887   95.00000% : 21924.771us
00:10:24.887   98.00000% : 23712.116us
00:10:24.887   99.00000% : 35031.971us
00:10:24.887   99.50000% : 44087.855us
00:10:24.887   99.90000% : 46709.295us
00:10:24.887   99.99000% : 47424.233us
00:10:24.887   99.99900% : 47424.233us
00:10:24.887   99.99990% : 47424.233us
00:10:24.887   99.99999% : 47424.233us
00:10:24.887  
00:10:24.887  Summary latency data for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:10:24.887  =================================================================================
00:10:24.887    1.00000% :  7745.164us
00:10:24.887   10.00000% :  8638.836us
00:10:24.887   25.00000% :  9353.775us
00:10:24.887   50.00000% : 10307.025us
00:10:24.887   75.00000% : 12213.527us
00:10:24.887   90.00000% : 19303.331us
00:10:24.887   95.00000% : 22043.927us
00:10:24.887   98.00000% : 23712.116us
00:10:24.887   99.00000% : 30265.716us
00:10:24.887   99.50000% : 39559.913us
00:10:24.887   99.90000% : 41943.040us
00:10:24.887   99.99000% : 42419.665us
00:10:24.887   99.99900% : 42419.665us
00:10:24.887   99.99990% : 42419.665us
00:10:24.887   99.99999% : 42419.665us
00:10:24.887  
00:10:24.887  Summary latency data for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:10:24.887  =================================================================================
00:10:24.887    1.00000% :  7685.585us
00:10:24.887   10.00000% :  8698.415us
00:10:24.887   25.00000% :  9413.353us
00:10:24.887   50.00000% : 10307.025us
00:10:24.887   75.00000% : 12213.527us
00:10:24.887   90.00000% : 19065.018us
00:10:24.887   95.00000% : 22163.084us
00:10:24.887   98.00000% : 23473.804us
00:10:24.887   99.00000% : 25499.462us
00:10:24.887   99.50000% : 34317.033us
00:10:24.887   99.90000% : 36700.160us
00:10:24.887   99.99000% : 37176.785us
00:10:24.887   99.99900% : 37176.785us
00:10:24.887   99.99990% : 37176.785us
00:10:24.887   99.99999% : 37176.785us
00:10:24.887  
00:10:24.887  Summary latency data for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:10:24.887  =================================================================================
00:10:24.887    1.00000% :  7745.164us
00:10:24.887   10.00000% :  8698.415us
00:10:24.887   25.00000% :  9353.775us
00:10:24.887   50.00000% : 10307.025us
00:10:24.887   75.00000% : 12392.262us
00:10:24.887   90.00000% : 19660.800us
00:10:24.887   95.00000% : 21924.771us
00:10:24.887   98.00000% : 23116.335us
00:10:24.887   99.00000% : 23592.960us
00:10:24.887   99.50000% : 29074.153us
00:10:24.887   99.90000% : 31457.280us
00:10:24.887   99.99000% : 31933.905us
00:10:24.887   99.99900% : 31933.905us
00:10:24.887   99.99990% : 31933.905us
00:10:24.887   99.99999% : 31933.905us
00:10:24.887  
00:10:24.887  Summary latency data for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:10:24.887  =================================================================================
00:10:24.887    1.00000% :  7685.585us
00:10:24.887   10.00000% :  8698.415us
00:10:24.887   25.00000% :  9353.775us
00:10:24.887   50.00000% : 10307.025us
00:10:24.887   75.00000% : 12451.840us
00:10:24.887   90.00000% : 18826.705us
00:10:24.887   95.00000% : 21805.615us
00:10:24.887   98.00000% : 23116.335us
00:10:24.887   99.00000% : 23950.429us
00:10:24.887   99.50000% : 25141.993us
00:10:24.887   99.90000% : 26214.400us
00:10:24.887   99.99000% : 26691.025us
00:10:24.887   99.99900% : 26810.182us
00:10:24.887   99.99990% : 26810.182us
00:10:24.887   99.99999% : 26810.182us
00:10:24.887  
00:10:24.887  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:10:24.887  ==============================================================================
00:10:24.887         Range in us     Cumulative    IO count
00:10:24.887   7208.960 -  7238.749:    0.0281%  (        3)
00:10:24.887   7238.749 -  7268.538:    0.0468%  (        2)
00:10:24.887   7268.538 -  7298.327:    0.0749%  (        3)
00:10:24.887   7298.327 -  7328.116:    0.1029%  (        3)
00:10:24.887   7328.116 -  7357.905:    0.1591%  (        6)
00:10:24.887   7357.905 -  7387.695:    0.1684%  (        1)
00:10:24.887   7387.695 -  7417.484:    0.2339%  (        7)
00:10:24.887   7417.484 -  7447.273:    0.3181%  (        9)
00:10:24.887   7447.273 -  7477.062:    0.3930%  (        8)
00:10:24.887   7477.062 -  7506.851:    0.4585%  (        7)
00:10:24.887   7506.851 -  7536.640:    0.5614%  (       11)
00:10:24.887   7536.640 -  7566.429:    0.6456%  (        9)
00:10:24.887   7566.429 -  7596.218:    0.7766%  (       14)
00:10:24.887   7596.218 -  7626.007:    0.9076%  (       14)
00:10:24.887   7626.007 -  7685.585:    1.1602%  (       27)
00:10:24.887   7685.585 -  7745.164:    1.4689%  (       33)
00:10:24.887   7745.164 -  7804.742:    1.7871%  (       34)
00:10:24.887   7804.742 -  7864.320:    2.0865%  (       32)
00:10:24.887   7864.320 -  7923.898:    2.3671%  (       30)
00:10:24.887   7923.898 -  7983.476:    2.6946%  (       35)
00:10:24.887   7983.476 -  8043.055:    3.0314%  (       36)
00:10:24.887   8043.055 -  8102.633:    3.4618%  (       46)
00:10:24.887   8102.633 -  8162.211:    3.8922%  (       46)
00:10:24.887   8162.211 -  8221.789:    4.5097%  (       66)
00:10:24.887   8221.789 -  8281.367:    5.1834%  (       72)
00:10:24.887   8281.367 -  8340.945:    6.0909%  (       97)
00:10:24.887   8340.945 -  8400.524:    6.9049%  (       87)
00:10:24.887   8400.524 -  8460.102:    7.8780%  (      104)
00:10:24.887   8460.102 -  8519.680:    8.7575%  (       94)
00:10:24.887   8519.680 -  8579.258:    9.6931%  (      100)
00:10:24.887   8579.258 -  8638.836:   10.5632%  (       93)
00:10:24.887   8638.836 -  8698.415:   11.5269%  (      103)
00:10:24.887   8698.415 -  8757.993:   12.4813%  (      102)
00:10:24.887   8757.993 -  8817.571:   13.5479%  (      114)
00:10:24.887   8817.571 -  8877.149:   14.7549%  (      129)
00:10:24.887   8877.149 -  8936.727:   15.9993%  (      133)
00:10:24.887   8936.727 -  8996.305:   17.2436%  (      133)
00:10:24.887   8996.305 -  9055.884:   18.4974%  (      134)
00:10:24.887   9055.884 -  9115.462:   19.8915%  (      149)
00:10:24.887   9115.462 -  9175.040:   21.2668%  (      147)
00:10:24.887   9175.040 -  9234.618:   22.7358%  (      157)
00:10:24.887   9234.618 -  9294.196:   24.1766%  (      154)
00:10:24.887   9294.196 -  9353.775:   25.7391%  (      167)
00:10:24.887   9353.775 -  9413.353:   27.4233%  (      180)
00:10:24.887   9413.353 -  9472.931:   29.1168%  (      181)
00:10:24.888   9472.931 -  9532.509:   30.5951%  (      158)
00:10:24.888   9532.509 -  9592.087:   32.2698%  (      179)
00:10:24.888   9592.087 -  9651.665:   33.8604%  (      170)
00:10:24.888   9651.665 -  9711.244:   35.3481%  (      159)
00:10:24.888   9711.244 -  9770.822:   37.0322%  (      180)
00:10:24.888   9770.822 -  9830.400:   38.5666%  (      164)
00:10:24.888   9830.400 -  9889.978:   40.1198%  (      166)
00:10:24.888   9889.978 -  9949.556:   41.6074%  (      159)
00:10:24.888   9949.556 - 10009.135:   43.1793%  (      168)
00:10:24.888  10009.135 - 10068.713:   44.7605%  (      169)
00:10:24.888  10068.713 - 10128.291:   46.3043%  (      165)
00:10:24.888  10128.291 - 10187.869:   47.7451%  (      154)
00:10:24.888  10187.869 - 10247.447:   49.1766%  (      153)
00:10:24.888  10247.447 - 10307.025:   50.6362%  (      156)
00:10:24.888  10307.025 - 10366.604:   51.9555%  (      141)
00:10:24.888  10366.604 - 10426.182:   53.3028%  (      144)
00:10:24.888  10426.182 - 10485.760:   54.6126%  (      140)
00:10:24.888  10485.760 - 10545.338:   55.8757%  (      135)
00:10:24.888  10545.338 - 10604.916:   57.0921%  (      130)
00:10:24.888  10604.916 - 10664.495:   58.2522%  (      124)
00:10:24.888  10664.495 - 10724.073:   59.3376%  (      116)
00:10:24.888  10724.073 - 10783.651:   60.3948%  (      113)
00:10:24.888  10783.651 - 10843.229:   61.4427%  (      112)
00:10:24.888  10843.229 - 10902.807:   62.5000%  (      113)
00:10:24.888  10902.807 - 10962.385:   63.3608%  (       92)
00:10:24.888  10962.385 - 11021.964:   64.2122%  (       91)
00:10:24.888  11021.964 - 11081.542:   65.0543%  (       90)
00:10:24.888  11081.542 - 11141.120:   65.8215%  (       82)
00:10:24.888  11141.120 - 11200.698:   66.5138%  (       74)
00:10:24.888  11200.698 - 11260.276:   67.2436%  (       78)
00:10:24.888  11260.276 - 11319.855:   67.8331%  (       63)
00:10:24.888  11319.855 - 11379.433:   68.3851%  (       59)
00:10:24.888  11379.433 - 11439.011:   68.8903%  (       54)
00:10:24.888  11439.011 - 11498.589:   69.3394%  (       48)
00:10:24.888  11498.589 - 11558.167:   69.8634%  (       56)
00:10:24.888  11558.167 - 11617.745:   70.2938%  (       46)
00:10:24.888  11617.745 - 11677.324:   70.7242%  (       46)
00:10:24.888  11677.324 - 11736.902:   71.1171%  (       42)
00:10:24.888  11736.902 - 11796.480:   71.5288%  (       44)
00:10:24.888  11796.480 - 11856.058:   71.9405%  (       44)
00:10:24.888  11856.058 - 11915.636:   72.4083%  (       50)
00:10:24.888  11915.636 - 11975.215:   72.7732%  (       39)
00:10:24.888  11975.215 - 12034.793:   73.1662%  (       42)
00:10:24.888  12034.793 - 12094.371:   73.5591%  (       42)
00:10:24.888  12094.371 - 12153.949:   73.9240%  (       39)
00:10:24.888  12153.949 - 12213.527:   74.2796%  (       38)
00:10:24.888  12213.527 - 12273.105:   74.6164%  (       36)
00:10:24.888  12273.105 - 12332.684:   74.9251%  (       33)
00:10:24.888  12332.684 - 12392.262:   75.2713%  (       37)
00:10:24.888  12392.262 - 12451.840:   75.5333%  (       28)
00:10:24.888  12451.840 - 12511.418:   75.8514%  (       34)
00:10:24.888  12511.418 - 12570.996:   76.1508%  (       32)
00:10:24.888  12570.996 - 12630.575:   76.5064%  (       38)
00:10:24.888  12630.575 - 12690.153:   76.8806%  (       40)
00:10:24.888  12690.153 - 12749.731:   77.2642%  (       41)
00:10:24.888  12749.731 - 12809.309:   77.6291%  (       39)
00:10:24.888  12809.309 - 12868.887:   77.9004%  (       29)
00:10:24.888  12868.887 - 12928.465:   78.1811%  (       30)
00:10:24.888  12928.465 - 12988.044:   78.5180%  (       36)
00:10:24.888  12988.044 - 13047.622:   78.8548%  (       36)
00:10:24.888  13047.622 - 13107.200:   79.2010%  (       37)
00:10:24.888  13107.200 - 13166.778:   79.5191%  (       34)
00:10:24.888  13166.778 - 13226.356:   79.8185%  (       32)
00:10:24.888  13226.356 - 13285.935:   80.1272%  (       33)
00:10:24.888  13285.935 - 13345.513:   80.3892%  (       28)
00:10:24.888  13345.513 - 13405.091:   80.6886%  (       32)
00:10:24.888  13405.091 - 13464.669:   80.9600%  (       29)
00:10:24.888  13464.669 - 13524.247:   81.2219%  (       28)
00:10:24.888  13524.247 - 13583.825:   81.5213%  (       32)
00:10:24.888  13583.825 - 13643.404:   81.7833%  (       28)
00:10:24.888  13643.404 - 13702.982:   82.0734%  (       31)
00:10:24.888  13702.982 - 13762.560:   82.3540%  (       30)
00:10:24.888  13762.560 - 13822.138:   82.6722%  (       34)
00:10:24.888  13822.138 - 13881.716:   82.8686%  (       21)
00:10:24.888  13881.716 - 13941.295:   83.1025%  (       25)
00:10:24.888  13941.295 - 14000.873:   83.3177%  (       23)
00:10:24.888  14000.873 - 14060.451:   83.5142%  (       21)
00:10:24.888  14060.451 - 14120.029:   83.7107%  (       21)
00:10:24.888  14120.029 - 14179.607:   83.8978%  (       20)
00:10:24.888  14179.607 - 14239.185:   84.1130%  (       23)
00:10:24.888  14239.185 - 14298.764:   84.3282%  (       23)
00:10:24.888  14298.764 - 14358.342:   84.5247%  (       21)
00:10:24.888  14358.342 - 14417.920:   84.6838%  (       17)
00:10:24.888  14417.920 - 14477.498:   84.8709%  (       20)
00:10:24.888  14477.498 - 14537.076:   85.0206%  (       16)
00:10:24.888  14537.076 - 14596.655:   85.1516%  (       14)
00:10:24.888  14596.655 - 14656.233:   85.2358%  (        9)
00:10:24.888  14656.233 - 14715.811:   85.3481%  (       12)
00:10:24.888  14715.811 - 14775.389:   85.4603%  (       12)
00:10:24.888  14775.389 - 14834.967:   85.5820%  (       13)
00:10:24.888  14834.967 - 14894.545:   85.6942%  (       12)
00:10:24.888  14894.545 - 14954.124:   85.8252%  (       14)
00:10:24.888  14954.124 - 15013.702:   85.9656%  (       15)
00:10:24.888  15013.702 - 15073.280:   86.1246%  (       17)
00:10:24.888  15073.280 - 15132.858:   86.2369%  (       12)
00:10:24.888  15132.858 - 15192.436:   86.3866%  (       16)
00:10:24.888  15192.436 - 15252.015:   86.4802%  (       10)
00:10:24.888  15252.015 - 15371.171:   86.6860%  (       22)
00:10:24.888  15371.171 - 15490.327:   86.8638%  (       19)
00:10:24.888  15490.327 - 15609.484:   87.0041%  (       15)
00:10:24.888  15609.484 - 15728.640:   87.1164%  (       12)
00:10:24.888  15728.640 - 15847.796:   87.2380%  (       13)
00:10:24.888  15847.796 - 15966.953:   87.3503%  (       12)
00:10:24.888  15966.953 - 16086.109:   87.4626%  (       12)
00:10:24.888  16086.109 - 16205.265:   87.5561%  (       10)
00:10:24.888  16205.265 - 16324.422:   87.6216%  (        7)
00:10:24.888  16324.422 - 16443.578:   87.6778%  (        6)
00:10:24.888  16443.578 - 16562.735:   87.7152%  (        4)
00:10:24.888  16562.735 - 16681.891:   87.7620%  (        5)
00:10:24.888  16681.891 - 16801.047:   87.8275%  (        7)
00:10:24.888  16801.047 - 16920.204:   87.8743%  (        5)
00:10:24.888  16920.204 - 17039.360:   87.9304%  (        6)
00:10:24.888  17039.360 - 17158.516:   88.0146%  (        9)
00:10:24.888  17158.516 - 17277.673:   88.0894%  (        8)
00:10:24.888  17277.673 - 17396.829:   88.1643%  (        8)
00:10:24.888  17396.829 - 17515.985:   88.2485%  (        9)
00:10:24.888  17515.985 - 17635.142:   88.3327%  (        9)
00:10:24.888  17635.142 - 17754.298:   88.3888%  (        6)
00:10:24.888  17754.298 - 17873.455:   88.4637%  (        8)
00:10:24.888  17873.455 - 17992.611:   88.5198%  (        6)
00:10:24.888  17992.611 - 18111.767:   88.6040%  (        9)
00:10:24.888  18111.767 - 18230.924:   88.6882%  (        9)
00:10:24.888  18230.924 - 18350.080:   88.7631%  (        8)
00:10:24.888  18350.080 - 18469.236:   88.8754%  (       12)
00:10:24.888  18469.236 - 18588.393:   89.0064%  (       14)
00:10:24.888  18588.393 - 18707.549:   89.0999%  (       10)
00:10:24.888  18707.549 - 18826.705:   89.2216%  (       13)
00:10:24.888  18826.705 - 18945.862:   89.3432%  (       13)
00:10:24.888  18945.862 - 19065.018:   89.4648%  (       13)
00:10:24.888  19065.018 - 19184.175:   89.6052%  (       15)
00:10:24.888  19184.175 - 19303.331:   89.7549%  (       16)
00:10:24.888  19303.331 - 19422.487:   89.9326%  (       19)
00:10:24.888  19422.487 - 19541.644:   90.1010%  (       18)
00:10:24.888  19541.644 - 19660.800:   90.3069%  (       22)
00:10:24.888  19660.800 - 19779.956:   90.4659%  (       17)
00:10:24.888  19779.956 - 19899.113:   90.6624%  (       21)
00:10:24.888  19899.113 - 20018.269:   90.8776%  (       23)
00:10:24.888  20018.269 - 20137.425:   91.1022%  (       24)
00:10:24.888  20137.425 - 20256.582:   91.2987%  (       21)
00:10:24.888  20256.582 - 20375.738:   91.5981%  (       32)
00:10:24.888  20375.738 - 20494.895:   91.8600%  (       28)
00:10:24.888  20494.895 - 20614.051:   92.1594%  (       32)
00:10:24.888  20614.051 - 20733.207:   92.4401%  (       30)
00:10:24.888  20733.207 - 20852.364:   92.7115%  (       29)
00:10:24.888  20852.364 - 20971.520:   92.9641%  (       27)
00:10:24.888  20971.520 - 21090.676:   93.1980%  (       25)
00:10:24.888  21090.676 - 21209.833:   93.4693%  (       29)
00:10:24.888  21209.833 - 21328.989:   93.7406%  (       29)
00:10:24.888  21328.989 - 21448.145:   93.9558%  (       23)
00:10:24.888  21448.145 - 21567.302:   94.1804%  (       24)
00:10:24.888  21567.302 - 21686.458:   94.4237%  (       26)
00:10:24.888  21686.458 - 21805.615:   94.6576%  (       25)
00:10:24.888  21805.615 - 21924.771:   94.8728%  (       23)
00:10:24.888  21924.771 - 22043.927:   95.0318%  (       17)
00:10:24.888  22043.927 - 22163.084:   95.2751%  (       26)
00:10:24.888  22163.084 - 22282.240:   95.4716%  (       21)
00:10:24.888  22282.240 - 22401.396:   95.6774%  (       22)
00:10:24.888  22401.396 - 22520.553:   95.9019%  (       24)
00:10:24.888  22520.553 - 22639.709:   96.1078%  (       22)
00:10:24.888  22639.709 - 22758.865:   96.3230%  (       23)
00:10:24.888  22758.865 - 22878.022:   96.5288%  (       22)
00:10:24.888  22878.022 - 22997.178:   96.7253%  (       21)
00:10:24.888  22997.178 - 23116.335:   96.8937%  (       18)
00:10:24.888  23116.335 - 23235.491:   97.0808%  (       20)
00:10:24.888  23235.491 - 23354.647:   97.2399%  (       17)
00:10:24.888  23354.647 - 23473.804:   97.4270%  (       20)
00:10:24.888  23473.804 - 23592.960:   97.5861%  (       17)
00:10:24.888  23592.960 - 23712.116:   97.7451%  (       17)
00:10:24.888  23712.116 - 23831.273:   97.8574%  (       12)
00:10:24.888  23831.273 - 23950.429:   97.9603%  (       11)
00:10:24.888  23950.429 - 24069.585:   98.0726%  (       12)
00:10:24.888  24069.585 - 24188.742:   98.2036%  (       14)
00:10:24.888  24188.742 - 24307.898:   98.2972%  (       10)
00:10:24.888  24307.898 - 24427.055:   98.3907%  (       10)
00:10:24.888  24427.055 - 24546.211:   98.4562%  (        7)
00:10:24.888  24546.211 - 24665.367:   98.5124%  (        6)
00:10:24.888  24665.367 - 24784.524:   98.5498%  (        4)
00:10:24.888  24784.524 - 24903.680:   98.5966%  (        5)
00:10:24.888  24903.680 - 25022.836:   98.6527%  (        6)
00:10:24.888  25022.836 - 25141.993:   98.7088%  (        6)
00:10:24.888  25141.993 - 25261.149:   98.7743%  (        7)
00:10:24.888  25261.149 - 25380.305:   98.8024%  (        3)
00:10:24.888  37653.411 - 37891.724:   98.8118%  (        1)
00:10:24.888  37891.724 - 38130.036:   98.8398%  (        3)
00:10:24.888  38130.036 - 38368.349:   98.8772%  (        4)
00:10:24.888  38368.349 - 38606.662:   98.9147%  (        4)
00:10:24.889  38606.662 - 38844.975:   98.9427%  (        3)
00:10:24.889  38844.975 - 39083.287:   98.9802%  (        4)
00:10:24.889  39083.287 - 39321.600:   99.0176%  (        4)
00:10:24.889  39321.600 - 39559.913:   99.0457%  (        3)
00:10:24.889  39559.913 - 39798.225:   99.0831%  (        4)
00:10:24.889  39798.225 - 40036.538:   99.1018%  (        2)
00:10:24.889  40036.538 - 40274.851:   99.1392%  (        4)
00:10:24.889  40274.851 - 40513.164:   99.1766%  (        4)
00:10:24.889  40513.164 - 40751.476:   99.2047%  (        3)
00:10:24.889  40751.476 - 40989.789:   99.2515%  (        5)
00:10:24.889  40989.789 - 41228.102:   99.2796%  (        3)
00:10:24.889  41228.102 - 41466.415:   99.3170%  (        4)
00:10:24.889  41466.415 - 41704.727:   99.3451%  (        3)
00:10:24.889  41704.727 - 41943.040:   99.3825%  (        4)
00:10:24.889  41943.040 - 42181.353:   99.4012%  (        2)
00:10:24.889  47900.858 - 48139.171:   99.4293%  (        3)
00:10:24.889  48139.171 - 48377.484:   99.4573%  (        3)
00:10:24.889  48377.484 - 48615.796:   99.4948%  (        4)
00:10:24.889  48615.796 - 48854.109:   99.5322%  (        4)
00:10:24.889  48854.109 - 49092.422:   99.5509%  (        2)
00:10:24.889  49092.422 - 49330.735:   99.5977%  (        5)
00:10:24.889  49330.735 - 49569.047:   99.6257%  (        3)
00:10:24.889  49569.047 - 49807.360:   99.6538%  (        3)
00:10:24.889  49807.360 - 50045.673:   99.6912%  (        4)
00:10:24.889  50045.673 - 50283.985:   99.7193%  (        3)
00:10:24.889  50283.985 - 50522.298:   99.7661%  (        5)
00:10:24.889  50522.298 - 50760.611:   99.8035%  (        4)
00:10:24.889  50760.611 - 50998.924:   99.8316%  (        3)
00:10:24.889  50998.924 - 51237.236:   99.8690%  (        4)
00:10:24.889  51237.236 - 51475.549:   99.8971%  (        3)
00:10:24.889  51475.549 - 51713.862:   99.9251%  (        3)
00:10:24.889  51713.862 - 51952.175:   99.9626%  (        4)
00:10:24.889  51952.175 - 52190.487:   99.9906%  (        3)
00:10:24.889  52190.487 - 52428.800:  100.0000%  (        1)
00:10:24.889  
00:10:24.889  Latency histogram for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:10:24.889  ==============================================================================
00:10:24.889         Range in us     Cumulative    IO count
00:10:24.889   7298.327 -  7328.116:    0.0094%  (        1)
00:10:24.889   7328.116 -  7357.905:    0.0281%  (        2)
00:10:24.889   7357.905 -  7387.695:    0.0468%  (        2)
00:10:24.889   7387.695 -  7417.484:    0.0749%  (        3)
00:10:24.889   7417.484 -  7447.273:    0.1403%  (        7)
00:10:24.889   7447.273 -  7477.062:    0.1871%  (        5)
00:10:24.889   7477.062 -  7506.851:    0.2620%  (        8)
00:10:24.889   7506.851 -  7536.640:    0.3555%  (       10)
00:10:24.889   7536.640 -  7566.429:    0.4304%  (        8)
00:10:24.889   7566.429 -  7596.218:    0.5052%  (        8)
00:10:24.889   7596.218 -  7626.007:    0.5988%  (       10)
00:10:24.889   7626.007 -  7685.585:    0.8327%  (       25)
00:10:24.889   7685.585 -  7745.164:    1.1321%  (       32)
00:10:24.889   7745.164 -  7804.742:    1.4596%  (       35)
00:10:24.889   7804.742 -  7864.320:    1.8525%  (       42)
00:10:24.889   7864.320 -  7923.898:    2.2362%  (       41)
00:10:24.889   7923.898 -  7983.476:    2.5917%  (       38)
00:10:24.889   7983.476 -  8043.055:    2.9472%  (       38)
00:10:24.889   8043.055 -  8102.633:    3.3496%  (       43)
00:10:24.889   8102.633 -  8162.211:    3.7893%  (       47)
00:10:24.889   8162.211 -  8221.789:    4.3226%  (       57)
00:10:24.889   8221.789 -  8281.367:    4.8840%  (       60)
00:10:24.889   8281.367 -  8340.945:    5.5109%  (       67)
00:10:24.889   8340.945 -  8400.524:    6.1658%  (       70)
00:10:24.889   8400.524 -  8460.102:    7.1108%  (      101)
00:10:24.889   8460.102 -  8519.680:    8.1213%  (      108)
00:10:24.889   8519.680 -  8579.258:    9.1411%  (      109)
00:10:24.889   8579.258 -  8638.836:   10.1422%  (      107)
00:10:24.889   8638.836 -  8698.415:   11.1527%  (      108)
00:10:24.889   8698.415 -  8757.993:   12.1257%  (      104)
00:10:24.889   8757.993 -  8817.571:   13.2111%  (      116)
00:10:24.889   8817.571 -  8877.149:   14.4180%  (      129)
00:10:24.889   8877.149 -  8936.727:   15.5782%  (      124)
00:10:24.889   8936.727 -  8996.305:   16.9536%  (      147)
00:10:24.889   8996.305 -  9055.884:   18.3757%  (      152)
00:10:24.889   9055.884 -  9115.462:   19.8166%  (      154)
00:10:24.889   9115.462 -  9175.040:   21.3791%  (      167)
00:10:24.889   9175.040 -  9234.618:   22.9042%  (      163)
00:10:24.889   9234.618 -  9294.196:   24.4667%  (      167)
00:10:24.889   9294.196 -  9353.775:   26.1415%  (      179)
00:10:24.889   9353.775 -  9413.353:   27.7133%  (      168)
00:10:24.889   9413.353 -  9472.931:   29.2103%  (      160)
00:10:24.889   9472.931 -  9532.509:   30.7073%  (      160)
00:10:24.889   9532.509 -  9592.087:   32.1856%  (      158)
00:10:24.889   9592.087 -  9651.665:   33.6826%  (      160)
00:10:24.889   9651.665 -  9711.244:   35.1796%  (      160)
00:10:24.889   9711.244 -  9770.822:   36.5831%  (      150)
00:10:24.889   9770.822 -  9830.400:   38.1643%  (      169)
00:10:24.889   9830.400 -  9889.978:   39.8016%  (      175)
00:10:24.889   9889.978 -  9949.556:   41.5326%  (      185)
00:10:24.889   9949.556 - 10009.135:   43.0951%  (      167)
00:10:24.889  10009.135 - 10068.713:   44.5453%  (      155)
00:10:24.889  10068.713 - 10128.291:   45.9768%  (      153)
00:10:24.889  10128.291 - 10187.869:   47.3990%  (      152)
00:10:24.889  10187.869 - 10247.447:   48.8772%  (      158)
00:10:24.889  10247.447 - 10307.025:   50.3555%  (      158)
00:10:24.889  10307.025 - 10366.604:   51.7871%  (      153)
00:10:24.889  10366.604 - 10426.182:   53.1718%  (      148)
00:10:24.889  10426.182 - 10485.760:   54.5191%  (      144)
00:10:24.889  10485.760 - 10545.338:   55.7260%  (      129)
00:10:24.889  10545.338 - 10604.916:   56.9704%  (      133)
00:10:24.889  10604.916 - 10664.495:   58.2242%  (      134)
00:10:24.889  10664.495 - 10724.073:   59.4405%  (      130)
00:10:24.889  10724.073 - 10783.651:   60.6568%  (      130)
00:10:24.889  10783.651 - 10843.229:   61.8825%  (      131)
00:10:24.889  10843.229 - 10902.807:   63.0146%  (      121)
00:10:24.889  10902.807 - 10962.385:   64.0438%  (      110)
00:10:24.889  10962.385 - 11021.964:   65.0168%  (      104)
00:10:24.889  11021.964 - 11081.542:   65.8963%  (       94)
00:10:24.889  11081.542 - 11141.120:   66.8413%  (      101)
00:10:24.889  11141.120 - 11200.698:   67.6366%  (       85)
00:10:24.889  11200.698 - 11260.276:   68.3009%  (       71)
00:10:24.889  11260.276 - 11319.855:   68.8623%  (       60)
00:10:24.889  11319.855 - 11379.433:   69.3394%  (       51)
00:10:24.889  11379.433 - 11439.011:   69.7979%  (       49)
00:10:24.889  11439.011 - 11498.589:   70.2470%  (       48)
00:10:24.889  11498.589 - 11558.167:   70.6400%  (       42)
00:10:24.889  11558.167 - 11617.745:   71.0797%  (       47)
00:10:24.889  11617.745 - 11677.324:   71.4446%  (       39)
00:10:24.889  11677.324 - 11736.902:   71.8189%  (       40)
00:10:24.889  11736.902 - 11796.480:   72.1931%  (       40)
00:10:24.889  11796.480 - 11856.058:   72.5861%  (       42)
00:10:24.889  11856.058 - 11915.636:   72.9510%  (       39)
00:10:24.889  11915.636 - 11975.215:   73.2878%  (       36)
00:10:24.889  11975.215 - 12034.793:   73.6340%  (       37)
00:10:24.889  12034.793 - 12094.371:   74.0082%  (       40)
00:10:24.889  12094.371 - 12153.949:   74.3263%  (       34)
00:10:24.889  12153.949 - 12213.527:   74.6725%  (       37)
00:10:24.889  12213.527 - 12273.105:   75.0561%  (       41)
00:10:24.889  12273.105 - 12332.684:   75.4210%  (       39)
00:10:24.889  12332.684 - 12392.262:   75.7766%  (       38)
00:10:24.889  12392.262 - 12451.840:   76.1228%  (       37)
00:10:24.889  12451.840 - 12511.418:   76.4502%  (       35)
00:10:24.889  12511.418 - 12570.996:   76.7683%  (       34)
00:10:24.889  12570.996 - 12630.575:   77.0303%  (       28)
00:10:24.889  12630.575 - 12690.153:   77.2923%  (       28)
00:10:24.889  12690.153 - 12749.731:   77.5075%  (       23)
00:10:24.889  12749.731 - 12809.309:   77.7695%  (       28)
00:10:24.889  12809.309 - 12868.887:   78.0969%  (       35)
00:10:24.889  12868.887 - 12928.465:   78.3963%  (       32)
00:10:24.889  12928.465 - 12988.044:   78.6864%  (       31)
00:10:24.889  12988.044 - 13047.622:   79.0138%  (       35)
00:10:24.889  13047.622 - 13107.200:   79.3507%  (       36)
00:10:24.889  13107.200 - 13166.778:   79.6688%  (       34)
00:10:24.889  13166.778 - 13226.356:   79.9869%  (       34)
00:10:24.889  13226.356 - 13285.935:   80.2582%  (       29)
00:10:24.889  13285.935 - 13345.513:   80.5109%  (       27)
00:10:24.889  13345.513 - 13405.091:   80.7541%  (       26)
00:10:24.889  13405.091 - 13464.669:   80.9319%  (       19)
00:10:24.889  13464.669 - 13524.247:   81.1564%  (       24)
00:10:24.889  13524.247 - 13583.825:   81.4091%  (       27)
00:10:24.889  13583.825 - 13643.404:   81.6617%  (       27)
00:10:24.889  13643.404 - 13702.982:   81.9049%  (       26)
00:10:24.889  13702.982 - 13762.560:   82.1576%  (       27)
00:10:24.889  13762.560 - 13822.138:   82.3915%  (       25)
00:10:24.889  13822.138 - 13881.716:   82.6347%  (       26)
00:10:24.889  13881.716 - 13941.295:   82.8219%  (       20)
00:10:24.889  13941.295 - 14000.873:   82.9903%  (       18)
00:10:24.889  14000.873 - 14060.451:   83.1493%  (       17)
00:10:24.889  14060.451 - 14120.029:   83.3458%  (       21)
00:10:24.889  14120.029 - 14179.607:   83.5236%  (       19)
00:10:24.889  14179.607 - 14239.185:   83.6920%  (       18)
00:10:24.889  14239.185 - 14298.764:   83.8604%  (       18)
00:10:24.889  14298.764 - 14358.342:   84.0195%  (       17)
00:10:24.889  14358.342 - 14417.920:   84.1692%  (       16)
00:10:24.889  14417.920 - 14477.498:   84.3189%  (       16)
00:10:24.889  14477.498 - 14537.076:   84.4873%  (       18)
00:10:24.889  14537.076 - 14596.655:   84.6650%  (       19)
00:10:24.889  14596.655 - 14656.233:   84.8147%  (       16)
00:10:24.889  14656.233 - 14715.811:   84.9644%  (       16)
00:10:24.889  14715.811 - 14775.389:   85.1141%  (       16)
00:10:24.889  14775.389 - 14834.967:   85.2638%  (       16)
00:10:24.889  14834.967 - 14894.545:   85.4042%  (       15)
00:10:24.889  14894.545 - 14954.124:   85.5632%  (       17)
00:10:24.889  14954.124 - 15013.702:   85.7223%  (       17)
00:10:24.889  15013.702 - 15073.280:   85.8533%  (       14)
00:10:24.889  15073.280 - 15132.858:   86.0311%  (       19)
00:10:24.889  15132.858 - 15192.436:   86.1714%  (       15)
00:10:24.889  15192.436 - 15252.015:   86.3024%  (       14)
00:10:24.889  15252.015 - 15371.171:   86.5457%  (       26)
00:10:24.889  15371.171 - 15490.327:   86.7421%  (       21)
00:10:24.889  15490.327 - 15609.484:   86.9667%  (       24)
00:10:24.889  15609.484 - 15728.640:   87.1912%  (       24)
00:10:24.889  15728.640 - 15847.796:   87.3597%  (       18)
00:10:24.889  15847.796 - 15966.953:   87.4251%  (        7)
00:10:24.889  15966.953 - 16086.109:   87.4906%  (        7)
00:10:24.889  16086.109 - 16205.265:   87.5374%  (        5)
00:10:24.889  16205.265 - 16324.422:   87.5561%  (        2)
00:10:24.889  16324.422 - 16443.578:   87.5842%  (        3)
00:10:24.889  16443.578 - 16562.735:   87.6403%  (        6)
00:10:24.889  16562.735 - 16681.891:   87.6965%  (        6)
00:10:24.890  16681.891 - 16801.047:   87.7526%  (        6)
00:10:24.890  16801.047 - 16920.204:   87.8181%  (        7)
00:10:24.890  16920.204 - 17039.360:   87.8930%  (        8)
00:10:24.890  17039.360 - 17158.516:   87.9491%  (        6)
00:10:24.890  17158.516 - 17277.673:   88.0146%  (        7)
00:10:24.890  17277.673 - 17396.829:   88.0988%  (        9)
00:10:24.890  17396.829 - 17515.985:   88.1924%  (       10)
00:10:24.890  17515.985 - 17635.142:   88.2953%  (       11)
00:10:24.890  17635.142 - 17754.298:   88.3795%  (        9)
00:10:24.890  17754.298 - 17873.455:   88.4637%  (        9)
00:10:24.890  17873.455 - 17992.611:   88.5573%  (       10)
00:10:24.890  17992.611 - 18111.767:   88.6508%  (       10)
00:10:24.890  18111.767 - 18230.924:   88.7350%  (        9)
00:10:24.890  18230.924 - 18350.080:   88.8099%  (        8)
00:10:24.890  18350.080 - 18469.236:   88.8941%  (        9)
00:10:24.890  18469.236 - 18588.393:   88.9876%  (       10)
00:10:24.890  18588.393 - 18707.549:   89.0719%  (        9)
00:10:24.890  18707.549 - 18826.705:   89.1561%  (        9)
00:10:24.890  18826.705 - 18945.862:   89.2590%  (       11)
00:10:24.890  18945.862 - 19065.018:   89.3432%  (        9)
00:10:24.890  19065.018 - 19184.175:   89.4368%  (       10)
00:10:24.890  19184.175 - 19303.331:   89.5677%  (       14)
00:10:24.890  19303.331 - 19422.487:   89.7174%  (       16)
00:10:24.890  19422.487 - 19541.644:   89.8578%  (       15)
00:10:24.890  19541.644 - 19660.800:   90.0449%  (       20)
00:10:24.890  19660.800 - 19779.956:   90.1853%  (       15)
00:10:24.890  19779.956 - 19899.113:   90.3911%  (       22)
00:10:24.890  19899.113 - 20018.269:   90.5595%  (       18)
00:10:24.890  20018.269 - 20137.425:   90.7653%  (       22)
00:10:24.890  20137.425 - 20256.582:   91.0180%  (       27)
00:10:24.890  20256.582 - 20375.738:   91.2799%  (       28)
00:10:24.890  20375.738 - 20494.895:   91.5606%  (       30)
00:10:24.890  20494.895 - 20614.051:   91.7945%  (       25)
00:10:24.890  20614.051 - 20733.207:   92.1033%  (       33)
00:10:24.890  20733.207 - 20852.364:   92.4401%  (       36)
00:10:24.890  20852.364 - 20971.520:   92.7769%  (       36)
00:10:24.890  20971.520 - 21090.676:   93.1138%  (       36)
00:10:24.890  21090.676 - 21209.833:   93.4506%  (       36)
00:10:24.890  21209.833 - 21328.989:   93.7687%  (       34)
00:10:24.890  21328.989 - 21448.145:   94.0400%  (       29)
00:10:24.890  21448.145 - 21567.302:   94.3301%  (       31)
00:10:24.890  21567.302 - 21686.458:   94.5640%  (       25)
00:10:24.890  21686.458 - 21805.615:   94.8447%  (       30)
00:10:24.890  21805.615 - 21924.771:   95.1254%  (       30)
00:10:24.890  21924.771 - 22043.927:   95.3874%  (       28)
00:10:24.890  22043.927 - 22163.084:   95.5932%  (       22)
00:10:24.890  22163.084 - 22282.240:   95.8177%  (       24)
00:10:24.890  22282.240 - 22401.396:   96.0423%  (       24)
00:10:24.890  22401.396 - 22520.553:   96.2762%  (       25)
00:10:24.890  22520.553 - 22639.709:   96.4727%  (       21)
00:10:24.890  22639.709 - 22758.865:   96.6785%  (       22)
00:10:24.890  22758.865 - 22878.022:   96.8469%  (       18)
00:10:24.890  22878.022 - 22997.178:   97.0153%  (       18)
00:10:24.890  22997.178 - 23116.335:   97.2305%  (       23)
00:10:24.890  23116.335 - 23235.491:   97.4457%  (       23)
00:10:24.890  23235.491 - 23354.647:   97.6235%  (       19)
00:10:24.890  23354.647 - 23473.804:   97.7826%  (       17)
00:10:24.890  23473.804 - 23592.960:   97.8761%  (       10)
00:10:24.890  23592.960 - 23712.116:   98.0071%  (       14)
00:10:24.890  23712.116 - 23831.273:   98.1287%  (       13)
00:10:24.890  23831.273 - 23950.429:   98.2223%  (       10)
00:10:24.890  23950.429 - 24069.585:   98.2972%  (        8)
00:10:24.890  24069.585 - 24188.742:   98.3626%  (        7)
00:10:24.890  24188.742 - 24307.898:   98.3907%  (        3)
00:10:24.890  24307.898 - 24427.055:   98.4281%  (        4)
00:10:24.890  24427.055 - 24546.211:   98.4562%  (        3)
00:10:24.890  24546.211 - 24665.367:   98.4843%  (        3)
00:10:24.890  24665.367 - 24784.524:   98.5124%  (        3)
00:10:24.890  24784.524 - 24903.680:   98.5404%  (        3)
00:10:24.890  24903.680 - 25022.836:   98.5778%  (        4)
00:10:24.890  25022.836 - 25141.993:   98.6153%  (        4)
00:10:24.890  25141.993 - 25261.149:   98.6714%  (        6)
00:10:24.890  25261.149 - 25380.305:   98.7369%  (        7)
00:10:24.890  25380.305 - 25499.462:   98.8024%  (        7)
00:10:24.890  33363.782 - 33602.095:   98.8211%  (        2)
00:10:24.890  33602.095 - 33840.407:   98.8585%  (        4)
00:10:24.890  33840.407 - 34078.720:   98.8866%  (        3)
00:10:24.890  34078.720 - 34317.033:   98.9240%  (        4)
00:10:24.890  34317.033 - 34555.345:   98.9615%  (        4)
00:10:24.890  34555.345 - 34793.658:   98.9989%  (        4)
00:10:24.890  34793.658 - 35031.971:   99.0363%  (        4)
00:10:24.890  35031.971 - 35270.284:   99.0737%  (        4)
00:10:24.890  35270.284 - 35508.596:   99.1112%  (        4)
00:10:24.890  35508.596 - 35746.909:   99.1486%  (        4)
00:10:24.890  35746.909 - 35985.222:   99.1860%  (        4)
00:10:24.890  35985.222 - 36223.535:   99.2234%  (        4)
00:10:24.890  36223.535 - 36461.847:   99.2609%  (        4)
00:10:24.890  36461.847 - 36700.160:   99.2983%  (        4)
00:10:24.890  36700.160 - 36938.473:   99.3357%  (        4)
00:10:24.890  36938.473 - 37176.785:   99.3731%  (        4)
00:10:24.890  37176.785 - 37415.098:   99.4012%  (        3)
00:10:24.890  43372.916 - 43611.229:   99.4293%  (        3)
00:10:24.890  43611.229 - 43849.542:   99.4760%  (        5)
00:10:24.890  43849.542 - 44087.855:   99.5041%  (        3)
00:10:24.890  44087.855 - 44326.167:   99.5322%  (        3)
00:10:24.890  44326.167 - 44564.480:   99.5696%  (        4)
00:10:24.890  44564.480 - 44802.793:   99.6070%  (        4)
00:10:24.890  44802.793 - 45041.105:   99.6445%  (        4)
00:10:24.890  45041.105 - 45279.418:   99.6819%  (        4)
00:10:24.890  45279.418 - 45517.731:   99.7193%  (        4)
00:10:24.890  45517.731 - 45756.044:   99.7474%  (        3)
00:10:24.890  45756.044 - 45994.356:   99.7942%  (        5)
00:10:24.890  45994.356 - 46232.669:   99.8222%  (        3)
00:10:24.890  46232.669 - 46470.982:   99.8690%  (        5)
00:10:24.890  46470.982 - 46709.295:   99.9064%  (        4)
00:10:24.890  46709.295 - 46947.607:   99.9439%  (        4)
00:10:24.890  46947.607 - 47185.920:   99.9813%  (        4)
00:10:24.890  47185.920 - 47424.233:  100.0000%  (        2)
00:10:24.890  
00:10:24.890  Latency histogram for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:10:24.890  ==============================================================================
00:10:24.890         Range in us     Cumulative    IO count
00:10:24.890   7298.327 -  7328.116:    0.0094%  (        1)
00:10:24.890   7328.116 -  7357.905:    0.0281%  (        2)
00:10:24.890   7357.905 -  7387.695:    0.0468%  (        2)
00:10:24.890   7387.695 -  7417.484:    0.0655%  (        2)
00:10:24.890   7417.484 -  7447.273:    0.1123%  (        5)
00:10:24.890   7447.273 -  7477.062:    0.1871%  (        8)
00:10:24.890   7477.062 -  7506.851:    0.2807%  (       10)
00:10:24.890   7506.851 -  7536.640:    0.3743%  (       10)
00:10:24.890   7536.640 -  7566.429:    0.4772%  (       11)
00:10:24.890   7566.429 -  7596.218:    0.5707%  (       10)
00:10:24.890   7596.218 -  7626.007:    0.7204%  (       16)
00:10:24.890   7626.007 -  7685.585:    0.9824%  (       28)
00:10:24.890   7685.585 -  7745.164:    1.2818%  (       32)
00:10:24.890   7745.164 -  7804.742:    1.6093%  (       35)
00:10:24.890   7804.742 -  7864.320:    1.9461%  (       36)
00:10:24.890   7864.320 -  7923.898:    2.2829%  (       36)
00:10:24.890   7923.898 -  7983.476:    2.6198%  (       36)
00:10:24.890   7983.476 -  8043.055:    2.9940%  (       40)
00:10:24.890   8043.055 -  8102.633:    3.3963%  (       43)
00:10:24.890   8102.633 -  8162.211:    3.8922%  (       53)
00:10:24.890   8162.211 -  8221.789:    4.4349%  (       58)
00:10:24.890   8221.789 -  8281.367:    5.0524%  (       66)
00:10:24.890   8281.367 -  8340.945:    5.6418%  (       63)
00:10:24.890   8340.945 -  8400.524:    6.4371%  (       85)
00:10:24.890   8400.524 -  8460.102:    7.2885%  (       91)
00:10:24.890   8460.102 -  8519.680:    8.2148%  (       99)
00:10:24.890   8519.680 -  8579.258:    9.2253%  (      108)
00:10:24.890   8579.258 -  8638.836:   10.3293%  (      118)
00:10:24.890   8638.836 -  8698.415:   11.3585%  (      110)
00:10:24.890   8698.415 -  8757.993:   12.4064%  (      112)
00:10:24.890   8757.993 -  8817.571:   13.5198%  (      119)
00:10:24.890   8817.571 -  8877.149:   14.6052%  (      116)
00:10:24.890   8877.149 -  8936.727:   15.7934%  (      127)
00:10:24.890   8936.727 -  8996.305:   17.0752%  (      137)
00:10:24.890   8996.305 -  9055.884:   18.4225%  (      144)
00:10:24.890   9055.884 -  9115.462:   19.9195%  (      160)
00:10:24.890   9115.462 -  9175.040:   21.3698%  (      155)
00:10:24.890   9175.040 -  9234.618:   22.8574%  (      159)
00:10:24.890   9234.618 -  9294.196:   24.3451%  (      159)
00:10:24.890   9294.196 -  9353.775:   25.8795%  (      164)
00:10:24.890   9353.775 -  9413.353:   27.4981%  (      173)
00:10:24.890   9413.353 -  9472.931:   29.1448%  (      176)
00:10:24.890   9472.931 -  9532.509:   30.7915%  (      176)
00:10:24.890   9532.509 -  9592.087:   32.2792%  (      159)
00:10:24.890   9592.087 -  9651.665:   33.8417%  (      167)
00:10:24.890   9651.665 -  9711.244:   35.3761%  (      164)
00:10:24.890   9711.244 -  9770.822:   36.9199%  (      165)
00:10:24.890   9770.822 -  9830.400:   38.5292%  (      172)
00:10:24.890   9830.400 -  9889.978:   40.0823%  (      166)
00:10:24.890   9889.978 -  9949.556:   41.6261%  (      165)
00:10:24.890   9949.556 - 10009.135:   43.2260%  (      171)
00:10:24.890  10009.135 - 10068.713:   44.7511%  (      163)
00:10:24.890  10068.713 - 10128.291:   46.1546%  (      150)
00:10:24.890  10128.291 - 10187.869:   47.5954%  (      154)
00:10:24.890  10187.869 - 10247.447:   48.9708%  (      147)
00:10:24.890  10247.447 - 10307.025:   50.4865%  (      162)
00:10:24.890  10307.025 - 10366.604:   51.9180%  (      153)
00:10:24.890  10366.604 - 10426.182:   53.3683%  (      155)
00:10:24.890  10426.182 - 10485.760:   54.8372%  (      157)
00:10:24.890  10485.760 - 10545.338:   56.2032%  (      146)
00:10:24.890  10545.338 - 10604.916:   57.5037%  (      139)
00:10:24.890  10604.916 - 10664.495:   58.7575%  (      134)
00:10:24.890  10664.495 - 10724.073:   59.9551%  (      128)
00:10:24.890  10724.073 - 10783.651:   61.1246%  (      125)
00:10:24.890  10783.651 - 10843.229:   62.2193%  (      117)
00:10:24.890  10843.229 - 10902.807:   63.2204%  (      107)
00:10:24.890  10902.807 - 10962.385:   64.2590%  (      111)
00:10:24.890  10962.385 - 11021.964:   65.2133%  (      102)
00:10:24.890  11021.964 - 11081.542:   66.0554%  (       90)
00:10:24.890  11081.542 - 11141.120:   66.7852%  (       78)
00:10:24.890  11141.120 - 11200.698:   67.4401%  (       70)
00:10:24.890  11200.698 - 11260.276:   68.1325%  (       74)
00:10:24.890  11260.276 - 11319.855:   68.7781%  (       69)
00:10:24.890  11319.855 - 11379.433:   69.4049%  (       67)
00:10:24.890  11379.433 - 11439.011:   69.9289%  (       56)
00:10:24.891  11439.011 - 11498.589:   70.3780%  (       48)
00:10:24.891  11498.589 - 11558.167:   70.8832%  (       54)
00:10:24.891  11558.167 - 11617.745:   71.3604%  (       51)
00:10:24.891  11617.745 - 11677.324:   71.8095%  (       48)
00:10:24.891  11677.324 - 11736.902:   72.2399%  (       46)
00:10:24.891  11736.902 - 11796.480:   72.6984%  (       49)
00:10:24.891  11796.480 - 11856.058:   73.1662%  (       50)
00:10:24.891  11856.058 - 11915.636:   73.5778%  (       44)
00:10:24.891  11915.636 - 11975.215:   73.9147%  (       36)
00:10:24.891  11975.215 - 12034.793:   74.2796%  (       39)
00:10:24.891  12034.793 - 12094.371:   74.5696%  (       31)
00:10:24.891  12094.371 - 12153.949:   74.8503%  (       30)
00:10:24.891  12153.949 - 12213.527:   75.1684%  (       34)
00:10:24.891  12213.527 - 12273.105:   75.5146%  (       37)
00:10:24.891  12273.105 - 12332.684:   75.8421%  (       35)
00:10:24.891  12332.684 - 12392.262:   76.1228%  (       30)
00:10:24.891  12392.262 - 12451.840:   76.3567%  (       25)
00:10:24.891  12451.840 - 12511.418:   76.6186%  (       28)
00:10:24.891  12511.418 - 12570.996:   76.9274%  (       33)
00:10:24.891  12570.996 - 12630.575:   77.1800%  (       27)
00:10:24.891  12630.575 - 12690.153:   77.4513%  (       29)
00:10:24.891  12690.153 - 12749.731:   77.7695%  (       34)
00:10:24.891  12749.731 - 12809.309:   78.1531%  (       41)
00:10:24.891  12809.309 - 12868.887:   78.5180%  (       39)
00:10:24.891  12868.887 - 12928.465:   78.8361%  (       34)
00:10:24.891  12928.465 - 12988.044:   79.1448%  (       33)
00:10:24.891  12988.044 - 13047.622:   79.4629%  (       34)
00:10:24.891  13047.622 - 13107.200:   79.7998%  (       36)
00:10:24.891  13107.200 - 13166.778:   80.0898%  (       31)
00:10:24.891  13166.778 - 13226.356:   80.4079%  (       34)
00:10:24.891  13226.356 - 13285.935:   80.6793%  (       29)
00:10:24.891  13285.935 - 13345.513:   80.9693%  (       31)
00:10:24.891  13345.513 - 13405.091:   81.2594%  (       31)
00:10:24.891  13405.091 - 13464.669:   81.5120%  (       27)
00:10:24.891  13464.669 - 13524.247:   81.7833%  (       29)
00:10:24.891  13524.247 - 13583.825:   82.0172%  (       25)
00:10:24.891  13583.825 - 13643.404:   82.2418%  (       24)
00:10:24.891  13643.404 - 13702.982:   82.5131%  (       29)
00:10:24.891  13702.982 - 13762.560:   82.7376%  (       24)
00:10:24.891  13762.560 - 13822.138:   82.8593%  (       13)
00:10:24.891  13822.138 - 13881.716:   82.9528%  (       10)
00:10:24.891  13881.716 - 13941.295:   83.0464%  (       10)
00:10:24.891  13941.295 - 14000.873:   83.1119%  (        7)
00:10:24.891  14000.873 - 14060.451:   83.1680%  (        6)
00:10:24.891  14060.451 - 14120.029:   83.2429%  (        8)
00:10:24.891  14120.029 - 14179.607:   83.3271%  (        9)
00:10:24.891  14179.607 - 14239.185:   83.4207%  (       10)
00:10:24.891  14239.185 - 14298.764:   83.5236%  (       11)
00:10:24.891  14298.764 - 14358.342:   83.6359%  (       12)
00:10:24.891  14358.342 - 14417.920:   83.7201%  (        9)
00:10:24.891  14417.920 - 14477.498:   83.7949%  (        8)
00:10:24.891  14477.498 - 14537.076:   83.8978%  (       11)
00:10:24.891  14537.076 - 14596.655:   83.9914%  (       10)
00:10:24.891  14596.655 - 14656.233:   84.1130%  (       13)
00:10:24.891  14656.233 - 14715.811:   84.2066%  (       10)
00:10:24.891  14715.811 - 14775.389:   84.2908%  (        9)
00:10:24.891  14775.389 - 14834.967:   84.3750%  (        9)
00:10:24.891  14834.967 - 14894.545:   84.4686%  (       10)
00:10:24.891  14894.545 - 14954.124:   84.5621%  (       10)
00:10:24.891  14954.124 - 15013.702:   84.6838%  (       13)
00:10:24.891  15013.702 - 15073.280:   84.7867%  (       11)
00:10:24.891  15073.280 - 15132.858:   84.8990%  (       12)
00:10:24.891  15132.858 - 15192.436:   84.9925%  (       10)
00:10:24.891  15192.436 - 15252.015:   85.1048%  (       12)
00:10:24.891  15252.015 - 15371.171:   85.2732%  (       18)
00:10:24.891  15371.171 - 15490.327:   85.4135%  (       15)
00:10:24.891  15490.327 - 15609.484:   85.5632%  (       16)
00:10:24.891  15609.484 - 15728.640:   85.7691%  (       22)
00:10:24.891  15728.640 - 15847.796:   85.9562%  (       20)
00:10:24.891  15847.796 - 15966.953:   86.1246%  (       18)
00:10:24.891  15966.953 - 16086.109:   86.3118%  (       20)
00:10:24.891  16086.109 - 16205.265:   86.5082%  (       21)
00:10:24.891  16205.265 - 16324.422:   86.7328%  (       24)
00:10:24.891  16324.422 - 16443.578:   86.9106%  (       19)
00:10:24.891  16443.578 - 16562.735:   87.1257%  (       23)
00:10:24.891  16562.735 - 16681.891:   87.3222%  (       21)
00:10:24.891  16681.891 - 16801.047:   87.5187%  (       21)
00:10:24.891  16801.047 - 16920.204:   87.7058%  (       20)
00:10:24.891  16920.204 - 17039.360:   87.9210%  (       23)
00:10:24.891  17039.360 - 17158.516:   88.1362%  (       23)
00:10:24.891  17158.516 - 17277.673:   88.3421%  (       22)
00:10:24.891  17277.673 - 17396.829:   88.4918%  (       16)
00:10:24.891  17396.829 - 17515.985:   88.6228%  (       14)
00:10:24.891  17515.985 - 17635.142:   88.7444%  (       13)
00:10:24.891  17635.142 - 17754.298:   88.8473%  (       11)
00:10:24.891  17754.298 - 17873.455:   88.9409%  (       10)
00:10:24.891  17873.455 - 17992.611:   89.0344%  (       10)
00:10:24.891  17992.611 - 18111.767:   89.1186%  (        9)
00:10:24.891  18111.767 - 18230.924:   89.2028%  (        9)
00:10:24.891  18230.924 - 18350.080:   89.2871%  (        9)
00:10:24.891  18350.080 - 18469.236:   89.3713%  (        9)
00:10:24.891  18469.236 - 18588.393:   89.4555%  (        9)
00:10:24.891  18588.393 - 18707.549:   89.5303%  (        8)
00:10:24.891  18707.549 - 18826.705:   89.6426%  (       12)
00:10:24.891  18826.705 - 18945.862:   89.7455%  (       11)
00:10:24.891  18945.862 - 19065.018:   89.8578%  (       12)
00:10:24.891  19065.018 - 19184.175:   89.9607%  (       11)
00:10:24.891  19184.175 - 19303.331:   90.0636%  (       11)
00:10:24.891  19303.331 - 19422.487:   90.1478%  (        9)
00:10:24.891  19422.487 - 19541.644:   90.2320%  (        9)
00:10:24.891  19541.644 - 19660.800:   90.3069%  (        8)
00:10:24.891  19660.800 - 19779.956:   90.4004%  (       10)
00:10:24.891  19779.956 - 19899.113:   90.5034%  (       11)
00:10:24.891  19899.113 - 20018.269:   90.6344%  (       14)
00:10:24.891  20018.269 - 20137.425:   90.8402%  (       22)
00:10:24.891  20137.425 - 20256.582:   91.0554%  (       23)
00:10:24.891  20256.582 - 20375.738:   91.3080%  (       27)
00:10:24.891  20375.738 - 20494.895:   91.5606%  (       27)
00:10:24.891  20494.895 - 20614.051:   91.8132%  (       27)
00:10:24.891  20614.051 - 20733.207:   92.1501%  (       36)
00:10:24.891  20733.207 - 20852.364:   92.4588%  (       33)
00:10:24.891  20852.364 - 20971.520:   92.7863%  (       35)
00:10:24.891  20971.520 - 21090.676:   93.1138%  (       35)
00:10:24.891  21090.676 - 21209.833:   93.3570%  (       26)
00:10:24.891  21209.833 - 21328.989:   93.6471%  (       31)
00:10:24.891  21328.989 - 21448.145:   93.8810%  (       25)
00:10:24.891  21448.145 - 21567.302:   94.1523%  (       29)
00:10:24.891  21567.302 - 21686.458:   94.3956%  (       26)
00:10:24.891  21686.458 - 21805.615:   94.6669%  (       29)
00:10:24.891  21805.615 - 21924.771:   94.9195%  (       27)
00:10:24.891  21924.771 - 22043.927:   95.1722%  (       27)
00:10:24.891  22043.927 - 22163.084:   95.4154%  (       26)
00:10:24.891  22163.084 - 22282.240:   95.6680%  (       27)
00:10:24.891  22282.240 - 22401.396:   95.9581%  (       31)
00:10:24.891  22401.396 - 22520.553:   96.2201%  (       28)
00:10:24.891  22520.553 - 22639.709:   96.4727%  (       27)
00:10:24.891  22639.709 - 22758.865:   96.7347%  (       28)
00:10:24.891  22758.865 - 22878.022:   96.9405%  (       22)
00:10:24.891  22878.022 - 22997.178:   97.1838%  (       26)
00:10:24.891  22997.178 - 23116.335:   97.3709%  (       20)
00:10:24.891  23116.335 - 23235.491:   97.5299%  (       17)
00:10:24.891  23235.491 - 23354.647:   97.7171%  (       20)
00:10:24.891  23354.647 - 23473.804:   97.8481%  (       14)
00:10:24.891  23473.804 - 23592.960:   97.9603%  (       12)
00:10:24.891  23592.960 - 23712.116:   98.0913%  (       14)
00:10:24.891  23712.116 - 23831.273:   98.1755%  (        9)
00:10:24.891  23831.273 - 23950.429:   98.2784%  (       11)
00:10:24.891  23950.429 - 24069.585:   98.3533%  (        8)
00:10:24.891  24069.585 - 24188.742:   98.4001%  (        5)
00:10:24.891  24188.742 - 24307.898:   98.4281%  (        3)
00:10:24.891  24307.898 - 24427.055:   98.4562%  (        3)
00:10:24.891  24427.055 - 24546.211:   98.4843%  (        3)
00:10:24.891  24546.211 - 24665.367:   98.5124%  (        3)
00:10:24.891  24665.367 - 24784.524:   98.5404%  (        3)
00:10:24.891  24784.524 - 24903.680:   98.5685%  (        3)
00:10:24.891  24903.680 - 25022.836:   98.5966%  (        3)
00:10:24.891  25022.836 - 25141.993:   98.6340%  (        4)
00:10:24.891  25141.993 - 25261.149:   98.6621%  (        3)
00:10:24.891  25261.149 - 25380.305:   98.6995%  (        4)
00:10:24.891  25380.305 - 25499.462:   98.7275%  (        3)
00:10:24.891  25499.462 - 25618.618:   98.7556%  (        3)
00:10:24.891  25618.618 - 25737.775:   98.7837%  (        3)
00:10:24.891  25737.775 - 25856.931:   98.8024%  (        2)
00:10:24.891  28835.840 - 28954.996:   98.8211%  (        2)
00:10:24.891  28954.996 - 29074.153:   98.8398%  (        2)
00:10:24.891  29074.153 - 29193.309:   98.8585%  (        2)
00:10:24.891  29193.309 - 29312.465:   98.8772%  (        2)
00:10:24.891  29312.465 - 29431.622:   98.8960%  (        2)
00:10:24.891  29431.622 - 29550.778:   98.9147%  (        2)
00:10:24.891  29550.778 - 29669.935:   98.9240%  (        1)
00:10:24.891  29669.935 - 29789.091:   98.9427%  (        2)
00:10:24.891  29789.091 - 29908.247:   98.9615%  (        2)
00:10:24.892  29908.247 - 30027.404:   98.9802%  (        2)
00:10:24.892  30027.404 - 30146.560:   98.9989%  (        2)
00:10:24.892  30146.560 - 30265.716:   99.0176%  (        2)
00:10:24.892  30265.716 - 30384.873:   99.0363%  (        2)
00:10:24.892  30384.873 - 30504.029:   99.0550%  (        2)
00:10:24.892  30504.029 - 30742.342:   99.0831%  (        3)
00:10:24.892  30742.342 - 30980.655:   99.1205%  (        4)
00:10:24.892  30980.655 - 31218.967:   99.1579%  (        4)
00:10:24.892  31218.967 - 31457.280:   99.1954%  (        4)
00:10:24.892  31457.280 - 31695.593:   99.2234%  (        3)
00:10:24.892  31695.593 - 31933.905:   99.2609%  (        4)
00:10:24.892  31933.905 - 32172.218:   99.2983%  (        4)
00:10:24.892  32172.218 - 32410.531:   99.3263%  (        3)
00:10:24.892  32410.531 - 32648.844:   99.3638%  (        4)
00:10:24.892  32648.844 - 32887.156:   99.4012%  (        4)
00:10:24.892  38844.975 - 39083.287:   99.4386%  (        4)
00:10:24.892  39083.287 - 39321.600:   99.4667%  (        3)
00:10:24.892  39321.600 - 39559.913:   99.5135%  (        5)
00:10:24.892  39559.913 - 39798.225:   99.5509%  (        4)
00:10:24.892  39798.225 - 40036.538:   99.5883%  (        4)
00:10:24.892  40036.538 - 40274.851:   99.6351%  (        5)
00:10:24.892  40274.851 - 40513.164:   99.6725%  (        4)
00:10:24.892  40513.164 - 40751.476:   99.7100%  (        4)
00:10:24.892  40751.476 - 40989.789:   99.7474%  (        4)
00:10:24.892  40989.789 - 41228.102:   99.7848%  (        4)
00:10:24.892  41228.102 - 41466.415:   99.8316%  (        5)
00:10:24.892  41466.415 - 41704.727:   99.8690%  (        4)
00:10:24.892  41704.727 - 41943.040:   99.9158%  (        5)
00:10:24.892  41943.040 - 42181.353:   99.9532%  (        4)
00:10:24.892  42181.353 - 42419.665:  100.0000%  (        5)
00:10:24.892  
00:10:24.892  Latency histogram for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:10:24.892  ==============================================================================
00:10:24.892         Range in us     Cumulative    IO count
00:10:24.892   7268.538 -  7298.327:    0.0187%  (        2)
00:10:24.892   7298.327 -  7328.116:    0.0374%  (        2)
00:10:24.892   7328.116 -  7357.905:    0.0561%  (        2)
00:10:24.892   7357.905 -  7387.695:    0.0749%  (        2)
00:10:24.892   7387.695 -  7417.484:    0.1123%  (        4)
00:10:24.892   7417.484 -  7447.273:    0.1497%  (        4)
00:10:24.892   7447.273 -  7477.062:    0.2246%  (        8)
00:10:24.892   7477.062 -  7506.851:    0.2994%  (        8)
00:10:24.892   7506.851 -  7536.640:    0.3743%  (        8)
00:10:24.892   7536.640 -  7566.429:    0.4865%  (       12)
00:10:24.892   7566.429 -  7596.218:    0.5894%  (       11)
00:10:24.892   7596.218 -  7626.007:    0.7017%  (       12)
00:10:24.892   7626.007 -  7685.585:    1.0011%  (       32)
00:10:24.892   7685.585 -  7745.164:    1.2912%  (       31)
00:10:24.892   7745.164 -  7804.742:    1.6093%  (       34)
00:10:24.892   7804.742 -  7864.320:    1.9461%  (       36)
00:10:24.892   7864.320 -  7923.898:    2.2736%  (       35)
00:10:24.892   7923.898 -  7983.476:    2.6291%  (       38)
00:10:24.892   7983.476 -  8043.055:    3.0408%  (       44)
00:10:24.892   8043.055 -  8102.633:    3.4431%  (       43)
00:10:24.892   8102.633 -  8162.211:    3.9016%  (       49)
00:10:24.892   8162.211 -  8221.789:    4.4255%  (       56)
00:10:24.892   8221.789 -  8281.367:    4.9121%  (       52)
00:10:24.892   8281.367 -  8340.945:    5.4921%  (       62)
00:10:24.892   8340.945 -  8400.524:    6.2313%  (       79)
00:10:24.892   8400.524 -  8460.102:    6.9985%  (       82)
00:10:24.892   8460.102 -  8519.680:    7.7844%  (       84)
00:10:24.892   8519.680 -  8579.258:    8.6733%  (       95)
00:10:24.892   8579.258 -  8638.836:    9.6089%  (      100)
00:10:24.892   8638.836 -  8698.415:   10.5820%  (      104)
00:10:24.892   8698.415 -  8757.993:   11.6299%  (      112)
00:10:24.892   8757.993 -  8817.571:   12.7620%  (      121)
00:10:24.892   8817.571 -  8877.149:   13.9502%  (      127)
00:10:24.892   8877.149 -  8936.727:   15.1010%  (      123)
00:10:24.892   8936.727 -  8996.305:   16.3174%  (      130)
00:10:24.892   8996.305 -  9055.884:   17.6085%  (      138)
00:10:24.892   9055.884 -  9115.462:   18.8716%  (      135)
00:10:24.892   9115.462 -  9175.040:   20.2470%  (      147)
00:10:24.892   9175.040 -  9234.618:   21.6785%  (      153)
00:10:24.892   9234.618 -  9294.196:   23.1942%  (      162)
00:10:24.892   9294.196 -  9353.775:   24.7661%  (      168)
00:10:24.892   9353.775 -  9413.353:   26.4502%  (      180)
00:10:24.892   9413.353 -  9472.931:   28.2186%  (      189)
00:10:24.892   9472.931 -  9532.509:   29.7530%  (      164)
00:10:24.892   9532.509 -  9592.087:   31.2594%  (      161)
00:10:24.892   9592.087 -  9651.665:   32.7751%  (      162)
00:10:24.892   9651.665 -  9711.244:   34.3282%  (      166)
00:10:24.892   9711.244 -  9770.822:   35.7878%  (      156)
00:10:24.892   9770.822 -  9830.400:   37.3877%  (      171)
00:10:24.892   9830.400 -  9889.978:   38.9689%  (      169)
00:10:24.892   9889.978 -  9949.556:   40.5782%  (      172)
00:10:24.892   9949.556 - 10009.135:   42.1875%  (      172)
00:10:24.892  10009.135 - 10068.713:   43.7406%  (      166)
00:10:24.892  10068.713 - 10128.291:   45.3406%  (      171)
00:10:24.892  10128.291 - 10187.869:   46.9686%  (      174)
00:10:24.892  10187.869 - 10247.447:   48.5498%  (      169)
00:10:24.892  10247.447 - 10307.025:   50.1403%  (      170)
00:10:24.892  10307.025 - 10366.604:   51.6841%  (      165)
00:10:24.892  10366.604 - 10426.182:   53.3028%  (      173)
00:10:24.892  10426.182 - 10485.760:   54.8559%  (      166)
00:10:24.892  10485.760 - 10545.338:   56.2594%  (      150)
00:10:24.892  10545.338 - 10604.916:   57.5599%  (      139)
00:10:24.892  10604.916 - 10664.495:   58.7575%  (      128)
00:10:24.892  10664.495 - 10724.073:   59.9270%  (      125)
00:10:24.892  10724.073 - 10783.651:   61.1246%  (      128)
00:10:24.892  10783.651 - 10843.229:   62.3503%  (      131)
00:10:24.892  10843.229 - 10902.807:   63.3421%  (      106)
00:10:24.892  10902.807 - 10962.385:   64.2777%  (      100)
00:10:24.892  10962.385 - 11021.964:   65.2040%  (       99)
00:10:24.892  11021.964 - 11081.542:   66.0835%  (       94)
00:10:24.892  11081.542 - 11141.120:   66.9068%  (       88)
00:10:24.892  11141.120 - 11200.698:   67.5337%  (       67)
00:10:24.892  11200.698 - 11260.276:   68.1886%  (       70)
00:10:24.892  11260.276 - 11319.855:   68.7968%  (       65)
00:10:24.892  11319.855 - 11379.433:   69.4237%  (       67)
00:10:24.892  11379.433 - 11439.011:   70.0225%  (       64)
00:10:24.892  11439.011 - 11498.589:   70.5371%  (       55)
00:10:24.892  11498.589 - 11558.167:   71.0236%  (       52)
00:10:24.892  11558.167 - 11617.745:   71.5382%  (       55)
00:10:24.892  11617.745 - 11677.324:   72.0434%  (       54)
00:10:24.892  11677.324 - 11736.902:   72.5206%  (       51)
00:10:24.892  11736.902 - 11796.480:   72.9323%  (       44)
00:10:24.892  11796.480 - 11856.058:   73.3439%  (       44)
00:10:24.892  11856.058 - 11915.636:   73.7182%  (       40)
00:10:24.892  11915.636 - 11975.215:   74.0550%  (       36)
00:10:24.892  11975.215 - 12034.793:   74.4480%  (       42)
00:10:24.892  12034.793 - 12094.371:   74.7006%  (       27)
00:10:24.892  12094.371 - 12153.949:   74.9813%  (       30)
00:10:24.892  12153.949 - 12213.527:   75.2713%  (       31)
00:10:24.892  12213.527 - 12273.105:   75.5988%  (       35)
00:10:24.892  12273.105 - 12332.684:   75.9263%  (       35)
00:10:24.892  12332.684 - 12392.262:   76.2818%  (       38)
00:10:24.892  12392.262 - 12451.840:   76.6280%  (       37)
00:10:24.892  12451.840 - 12511.418:   76.9555%  (       35)
00:10:24.892  12511.418 - 12570.996:   77.2642%  (       33)
00:10:24.892  12570.996 - 12630.575:   77.5823%  (       34)
00:10:24.892  12630.575 - 12690.153:   77.8911%  (       33)
00:10:24.892  12690.153 - 12749.731:   78.1999%  (       33)
00:10:24.892  12749.731 - 12809.309:   78.5180%  (       34)
00:10:24.892  12809.309 - 12868.887:   78.8361%  (       34)
00:10:24.892  12868.887 - 12928.465:   79.1542%  (       34)
00:10:24.892  12928.465 - 12988.044:   79.4723%  (       34)
00:10:24.892  12988.044 - 13047.622:   79.7904%  (       34)
00:10:24.892  13047.622 - 13107.200:   80.0805%  (       31)
00:10:24.892  13107.200 - 13166.778:   80.3892%  (       33)
00:10:24.892  13166.778 - 13226.356:   80.7073%  (       34)
00:10:24.892  13226.356 - 13285.935:   81.0442%  (       36)
00:10:24.892  13285.935 - 13345.513:   81.3155%  (       29)
00:10:24.892  13345.513 - 13405.091:   81.5775%  (       28)
00:10:24.892  13405.091 - 13464.669:   81.8301%  (       27)
00:10:24.892  13464.669 - 13524.247:   82.0453%  (       23)
00:10:24.892  13524.247 - 13583.825:   82.2418%  (       21)
00:10:24.892  13583.825 - 13643.404:   82.4008%  (       17)
00:10:24.892  13643.404 - 13702.982:   82.5599%  (       17)
00:10:24.892  13702.982 - 13762.560:   82.7189%  (       17)
00:10:24.892  13762.560 - 13822.138:   82.8499%  (       14)
00:10:24.892  13822.138 - 13881.716:   82.9622%  (       12)
00:10:24.892  13881.716 - 13941.295:   83.0464%  (        9)
00:10:24.892  13941.295 - 14000.873:   83.0932%  (        5)
00:10:24.892  14000.873 - 14060.451:   83.1587%  (        7)
00:10:24.892  14060.451 - 14120.029:   83.2335%  (        8)
00:10:24.892  14120.029 - 14179.607:   83.2990%  (        7)
00:10:24.892  14179.607 - 14239.185:   83.3645%  (        7)
00:10:24.892  14239.185 - 14298.764:   83.4394%  (        8)
00:10:24.892  14298.764 - 14358.342:   83.4674%  (        3)
00:10:24.892  14358.342 - 14417.920:   83.4955%  (        3)
00:10:24.892  14417.920 - 14477.498:   83.5797%  (        9)
00:10:24.892  14477.498 - 14537.076:   83.6639%  (        9)
00:10:24.892  14537.076 - 14596.655:   83.7294%  (        7)
00:10:24.892  14596.655 - 14656.233:   83.8323%  (       11)
00:10:24.892  14656.233 - 14715.811:   83.8978%  (        7)
00:10:24.892  14715.811 - 14775.389:   83.9914%  (       10)
00:10:24.892  14775.389 - 14834.967:   84.0662%  (        8)
00:10:24.892  14834.967 - 14894.545:   84.1411%  (        8)
00:10:24.892  14894.545 - 14954.124:   84.2440%  (       11)
00:10:24.892  14954.124 - 15013.702:   84.3563%  (       12)
00:10:24.892  15013.702 - 15073.280:   84.4686%  (       12)
00:10:24.892  15073.280 - 15132.858:   84.5808%  (       12)
00:10:24.892  15132.858 - 15192.436:   84.6931%  (       12)
00:10:24.892  15192.436 - 15252.015:   84.8522%  (       17)
00:10:24.892  15252.015 - 15371.171:   85.1516%  (       32)
00:10:24.892  15371.171 - 15490.327:   85.4510%  (       32)
00:10:24.892  15490.327 - 15609.484:   85.7223%  (       29)
00:10:24.892  15609.484 - 15728.640:   85.9749%  (       27)
00:10:24.892  15728.640 - 15847.796:   86.1901%  (       23)
00:10:24.892  15847.796 - 15966.953:   86.4053%  (       23)
00:10:24.892  15966.953 - 16086.109:   86.6486%  (       26)
00:10:24.892  16086.109 - 16205.265:   86.9199%  (       29)
00:10:24.892  16205.265 - 16324.422:   87.1538%  (       25)
00:10:24.893  16324.422 - 16443.578:   87.3690%  (       23)
00:10:24.893  16443.578 - 16562.735:   87.5561%  (       20)
00:10:24.893  16562.735 - 16681.891:   87.7246%  (       18)
00:10:24.893  16681.891 - 16801.047:   87.9023%  (       19)
00:10:24.893  16801.047 - 16920.204:   88.0801%  (       19)
00:10:24.893  16920.204 - 17039.360:   88.2766%  (       21)
00:10:24.893  17039.360 - 17158.516:   88.4637%  (       20)
00:10:24.893  17158.516 - 17277.673:   88.5760%  (       12)
00:10:24.893  17277.673 - 17396.829:   88.6508%  (        8)
00:10:24.893  17396.829 - 17515.985:   88.7350%  (        9)
00:10:24.893  17515.985 - 17635.142:   88.8005%  (        7)
00:10:24.893  17635.142 - 17754.298:   88.8192%  (        2)
00:10:24.893  17754.298 - 17873.455:   88.9034%  (        9)
00:10:24.893  17873.455 - 17992.611:   88.9970%  (       10)
00:10:24.893  17992.611 - 18111.767:   89.0999%  (       11)
00:10:24.893  18111.767 - 18230.924:   89.2122%  (       12)
00:10:24.893  18230.924 - 18350.080:   89.3151%  (       11)
00:10:24.893  18350.080 - 18469.236:   89.4180%  (       11)
00:10:24.893  18469.236 - 18588.393:   89.5490%  (       14)
00:10:24.893  18588.393 - 18707.549:   89.6519%  (       11)
00:10:24.893  18707.549 - 18826.705:   89.7923%  (       15)
00:10:24.893  18826.705 - 18945.862:   89.9139%  (       13)
00:10:24.893  18945.862 - 19065.018:   90.0356%  (       13)
00:10:24.893  19065.018 - 19184.175:   90.1478%  (       12)
00:10:24.893  19184.175 - 19303.331:   90.2788%  (       14)
00:10:24.893  19303.331 - 19422.487:   90.4004%  (       13)
00:10:24.893  19422.487 - 19541.644:   90.5221%  (       13)
00:10:24.893  19541.644 - 19660.800:   90.6531%  (       14)
00:10:24.893  19660.800 - 19779.956:   90.7841%  (       14)
00:10:24.893  19779.956 - 19899.113:   90.8870%  (       11)
00:10:24.893  19899.113 - 20018.269:   91.0460%  (       17)
00:10:24.893  20018.269 - 20137.425:   91.2144%  (       18)
00:10:24.893  20137.425 - 20256.582:   91.4109%  (       21)
00:10:24.893  20256.582 - 20375.738:   91.5793%  (       18)
00:10:24.893  20375.738 - 20494.895:   91.7478%  (       18)
00:10:24.893  20494.895 - 20614.051:   91.9629%  (       23)
00:10:24.893  20614.051 - 20733.207:   92.1688%  (       22)
00:10:24.893  20733.207 - 20852.364:   92.4214%  (       27)
00:10:24.893  20852.364 - 20971.520:   92.6834%  (       28)
00:10:24.893  20971.520 - 21090.676:   92.9454%  (       28)
00:10:24.893  21090.676 - 21209.833:   93.1886%  (       26)
00:10:24.893  21209.833 - 21328.989:   93.4225%  (       25)
00:10:24.893  21328.989 - 21448.145:   93.6939%  (       29)
00:10:24.893  21448.145 - 21567.302:   93.9465%  (       27)
00:10:24.893  21567.302 - 21686.458:   94.2085%  (       28)
00:10:24.893  21686.458 - 21805.615:   94.4424%  (       25)
00:10:24.893  21805.615 - 21924.771:   94.7043%  (       28)
00:10:24.893  21924.771 - 22043.927:   94.9663%  (       28)
00:10:24.893  22043.927 - 22163.084:   95.2283%  (       28)
00:10:24.893  22163.084 - 22282.240:   95.4622%  (       25)
00:10:24.893  22282.240 - 22401.396:   95.7710%  (       33)
00:10:24.893  22401.396 - 22520.553:   96.0610%  (       31)
00:10:24.893  22520.553 - 22639.709:   96.3323%  (       29)
00:10:24.893  22639.709 - 22758.865:   96.6224%  (       31)
00:10:24.893  22758.865 - 22878.022:   96.8937%  (       29)
00:10:24.893  22878.022 - 22997.178:   97.1557%  (       28)
00:10:24.893  22997.178 - 23116.335:   97.4270%  (       29)
00:10:24.893  23116.335 - 23235.491:   97.6422%  (       23)
00:10:24.893  23235.491 - 23354.647:   97.8761%  (       25)
00:10:24.893  23354.647 - 23473.804:   98.0820%  (       22)
00:10:24.893  23473.804 - 23592.960:   98.2317%  (       16)
00:10:24.893  23592.960 - 23712.116:   98.3439%  (       12)
00:10:24.893  23712.116 - 23831.273:   98.4562%  (       12)
00:10:24.893  23831.273 - 23950.429:   98.5498%  (       10)
00:10:24.893  23950.429 - 24069.585:   98.6153%  (        7)
00:10:24.893  24069.585 - 24188.742:   98.6901%  (        8)
00:10:24.893  24188.742 - 24307.898:   98.7369%  (        5)
00:10:24.893  24307.898 - 24427.055:   98.7837%  (        5)
00:10:24.893  24427.055 - 24546.211:   98.8305%  (        5)
00:10:24.893  24546.211 - 24665.367:   98.8772%  (        5)
00:10:24.893  24665.367 - 24784.524:   98.8960%  (        2)
00:10:24.893  24784.524 - 24903.680:   98.9147%  (        2)
00:10:24.893  24903.680 - 25022.836:   98.9334%  (        2)
00:10:24.893  25022.836 - 25141.993:   98.9521%  (        2)
00:10:24.893  25141.993 - 25261.149:   98.9708%  (        2)
00:10:24.893  25261.149 - 25380.305:   98.9895%  (        2)
00:10:24.893  25380.305 - 25499.462:   99.0082%  (        2)
00:10:24.893  25499.462 - 25618.618:   99.0269%  (        2)
00:10:24.893  25618.618 - 25737.775:   99.0457%  (        2)
00:10:24.893  25737.775 - 25856.931:   99.0644%  (        2)
00:10:24.893  25856.931 - 25976.087:   99.0831%  (        2)
00:10:24.893  25976.087 - 26095.244:   99.1018%  (        2)
00:10:24.893  26095.244 - 26214.400:   99.1205%  (        2)
00:10:24.893  26214.400 - 26333.556:   99.1392%  (        2)
00:10:24.893  26333.556 - 26452.713:   99.1579%  (        2)
00:10:24.893  26452.713 - 26571.869:   99.1766%  (        2)
00:10:24.893  26571.869 - 26691.025:   99.1860%  (        1)
00:10:24.893  26691.025 - 26810.182:   99.2047%  (        2)
00:10:24.893  26810.182 - 26929.338:   99.2234%  (        2)
00:10:24.893  26929.338 - 27048.495:   99.2421%  (        2)
00:10:24.893  27048.495 - 27167.651:   99.2515%  (        1)
00:10:24.893  27167.651 - 27286.807:   99.2702%  (        2)
00:10:24.893  27286.807 - 27405.964:   99.2889%  (        2)
00:10:24.893  27405.964 - 27525.120:   99.3076%  (        2)
00:10:24.893  27525.120 - 27644.276:   99.3263%  (        2)
00:10:24.893  27644.276 - 27763.433:   99.3451%  (        2)
00:10:24.893  27763.433 - 27882.589:   99.3638%  (        2)
00:10:24.893  27882.589 - 28001.745:   99.3825%  (        2)
00:10:24.893  28001.745 - 28120.902:   99.4012%  (        2)
00:10:24.893  33602.095 - 33840.407:   99.4293%  (        3)
00:10:24.893  33840.407 - 34078.720:   99.4667%  (        4)
00:10:24.893  34078.720 - 34317.033:   99.5041%  (        4)
00:10:24.893  34317.033 - 34555.345:   99.5509%  (        5)
00:10:24.893  34555.345 - 34793.658:   99.5883%  (        4)
00:10:24.893  34793.658 - 35031.971:   99.6351%  (        5)
00:10:24.893  35031.971 - 35270.284:   99.6725%  (        4)
00:10:24.893  35270.284 - 35508.596:   99.7100%  (        4)
00:10:24.893  35508.596 - 35746.909:   99.7474%  (        4)
00:10:24.893  35746.909 - 35985.222:   99.7942%  (        5)
00:10:24.893  35985.222 - 36223.535:   99.8316%  (        4)
00:10:24.893  36223.535 - 36461.847:   99.8784%  (        5)
00:10:24.893  36461.847 - 36700.160:   99.9158%  (        4)
00:10:24.893  36700.160 - 36938.473:   99.9532%  (        4)
00:10:24.893  36938.473 - 37176.785:  100.0000%  (        5)
00:10:24.893  
00:10:24.893  Latency histogram for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:10:24.893  ==============================================================================
00:10:24.893         Range in us     Cumulative    IO count
00:10:24.893   7268.538 -  7298.327:    0.0187%  (        2)
00:10:24.893   7298.327 -  7328.116:    0.0281%  (        1)
00:10:24.893   7328.116 -  7357.905:    0.0468%  (        2)
00:10:24.893   7357.905 -  7387.695:    0.0749%  (        3)
00:10:24.893   7387.695 -  7417.484:    0.1123%  (        4)
00:10:24.893   7417.484 -  7447.273:    0.1684%  (        6)
00:10:24.893   7447.273 -  7477.062:    0.2152%  (        5)
00:10:24.893   7477.062 -  7506.851:    0.2713%  (        6)
00:10:24.893   7506.851 -  7536.640:    0.3368%  (        7)
00:10:24.893   7536.640 -  7566.429:    0.4210%  (        9)
00:10:24.893   7566.429 -  7596.218:    0.5240%  (       11)
00:10:24.893   7596.218 -  7626.007:    0.6456%  (       13)
00:10:24.893   7626.007 -  7685.585:    0.9169%  (       29)
00:10:24.893   7685.585 -  7745.164:    1.2631%  (       37)
00:10:24.893   7745.164 -  7804.742:    1.5906%  (       35)
00:10:24.893   7804.742 -  7864.320:    1.9274%  (       36)
00:10:24.893   7864.320 -  7923.898:    2.3110%  (       41)
00:10:24.893   7923.898 -  7983.476:    2.6946%  (       41)
00:10:24.893   7983.476 -  8043.055:    3.0408%  (       37)
00:10:24.893   8043.055 -  8102.633:    3.4618%  (       45)
00:10:24.893   8102.633 -  8162.211:    3.9203%  (       49)
00:10:24.893   8162.211 -  8221.789:    4.4068%  (       52)
00:10:24.893   8221.789 -  8281.367:    4.9308%  (       56)
00:10:24.893   8281.367 -  8340.945:    5.5483%  (       66)
00:10:24.893   8340.945 -  8400.524:    6.2874%  (       79)
00:10:24.893   8400.524 -  8460.102:    7.1482%  (       92)
00:10:24.893   8460.102 -  8519.680:    8.0558%  (       97)
00:10:24.893   8519.680 -  8579.258:    8.8698%  (       87)
00:10:24.893   8579.258 -  8638.836:    9.8054%  (      100)
00:10:24.893   8638.836 -  8698.415:   10.7223%  (       98)
00:10:24.893   8698.415 -  8757.993:   11.6673%  (      101)
00:10:24.893   8757.993 -  8817.571:   12.6310%  (      103)
00:10:24.893   8817.571 -  8877.149:   13.6321%  (      107)
00:10:24.893   8877.149 -  8936.727:   14.8016%  (      125)
00:10:24.893   8936.727 -  8996.305:   16.0460%  (      133)
00:10:24.893   8996.305 -  9055.884:   17.3559%  (      140)
00:10:24.893   9055.884 -  9115.462:   18.6845%  (      142)
00:10:24.893   9115.462 -  9175.040:   20.2002%  (      162)
00:10:24.893   9175.040 -  9234.618:   21.7721%  (      168)
00:10:24.893   9234.618 -  9294.196:   23.4281%  (      177)
00:10:24.893   9294.196 -  9353.775:   25.1029%  (      179)
00:10:24.893   9353.775 -  9413.353:   26.8806%  (      190)
00:10:24.893   9413.353 -  9472.931:   28.5273%  (      176)
00:10:24.893   9472.931 -  9532.509:   30.1179%  (      170)
00:10:24.893   9532.509 -  9592.087:   31.5962%  (      158)
00:10:24.893   9592.087 -  9651.665:   33.0651%  (      157)
00:10:24.893   9651.665 -  9711.244:   34.6744%  (      172)
00:10:24.893   9711.244 -  9770.822:   36.2463%  (      168)
00:10:24.893   9770.822 -  9830.400:   37.7433%  (      160)
00:10:24.893   9830.400 -  9889.978:   39.4087%  (      178)
00:10:24.893   9889.978 -  9949.556:   40.9993%  (      170)
00:10:24.893   9949.556 - 10009.135:   42.6460%  (      176)
00:10:24.893  10009.135 - 10068.713:   44.2272%  (      169)
00:10:24.893  10068.713 - 10128.291:   45.7429%  (      162)
00:10:24.893  10128.291 - 10187.869:   47.2493%  (      161)
00:10:24.893  10187.869 - 10247.447:   48.9334%  (      180)
00:10:24.893  10247.447 - 10307.025:   50.3649%  (      153)
00:10:24.893  10307.025 - 10366.604:   51.8900%  (      163)
00:10:24.893  10366.604 - 10426.182:   53.4525%  (      167)
00:10:24.893  10426.182 - 10485.760:   54.7530%  (      139)
00:10:24.893  10485.760 - 10545.338:   56.1003%  (      144)
00:10:24.893  10545.338 - 10604.916:   57.3166%  (      130)
00:10:24.893  10604.916 - 10664.495:   58.2803%  (      103)
00:10:24.893  10664.495 - 10724.073:   59.2534%  (      104)
00:10:24.893  10724.073 - 10783.651:   60.2732%  (      109)
00:10:24.893  10783.651 - 10843.229:   61.3679%  (      117)
00:10:24.893  10843.229 - 10902.807:   62.3409%  (      104)
00:10:24.893  10902.807 - 10962.385:   63.3046%  (      103)
00:10:24.893  10962.385 - 11021.964:   64.1186%  (       87)
00:10:24.893  11021.964 - 11081.542:   64.9046%  (       84)
00:10:24.893  11081.542 - 11141.120:   65.6250%  (       77)
00:10:24.894  11141.120 - 11200.698:   66.2987%  (       72)
00:10:24.894  11200.698 - 11260.276:   66.8975%  (       64)
00:10:24.894  11260.276 - 11319.855:   67.4401%  (       58)
00:10:24.894  11319.855 - 11379.433:   68.0670%  (       67)
00:10:24.894  11379.433 - 11439.011:   68.6284%  (       60)
00:10:24.894  11439.011 - 11498.589:   69.1149%  (       52)
00:10:24.894  11498.589 - 11558.167:   69.5827%  (       50)
00:10:24.894  11558.167 - 11617.745:   70.0318%  (       48)
00:10:24.894  11617.745 - 11677.324:   70.4154%  (       41)
00:10:24.894  11677.324 - 11736.902:   70.7803%  (       39)
00:10:24.894  11736.902 - 11796.480:   71.1920%  (       44)
00:10:24.894  11796.480 - 11856.058:   71.6130%  (       45)
00:10:24.894  11856.058 - 11915.636:   71.9966%  (       41)
00:10:24.894  11915.636 - 11975.215:   72.3802%  (       41)
00:10:24.894  11975.215 - 12034.793:   72.8200%  (       47)
00:10:24.894  12034.793 - 12094.371:   73.2504%  (       46)
00:10:24.894  12094.371 - 12153.949:   73.6901%  (       47)
00:10:24.894  12153.949 - 12213.527:   74.1112%  (       45)
00:10:24.894  12213.527 - 12273.105:   74.5322%  (       45)
00:10:24.894  12273.105 - 12332.684:   74.9906%  (       49)
00:10:24.894  12332.684 - 12392.262:   75.4210%  (       46)
00:10:24.894  12392.262 - 12451.840:   75.8608%  (       47)
00:10:24.894  12451.840 - 12511.418:   76.2537%  (       42)
00:10:24.894  12511.418 - 12570.996:   76.6280%  (       40)
00:10:24.894  12570.996 - 12630.575:   76.9648%  (       36)
00:10:24.894  12630.575 - 12690.153:   77.3297%  (       39)
00:10:24.894  12690.153 - 12749.731:   77.6759%  (       37)
00:10:24.894  12749.731 - 12809.309:   78.0501%  (       40)
00:10:24.894  12809.309 - 12868.887:   78.4244%  (       40)
00:10:24.894  12868.887 - 12928.465:   78.7519%  (       35)
00:10:24.894  12928.465 - 12988.044:   79.1261%  (       40)
00:10:24.894  12988.044 - 13047.622:   79.5097%  (       41)
00:10:24.894  13047.622 - 13107.200:   79.9121%  (       43)
00:10:24.894  13107.200 - 13166.778:   80.2863%  (       40)
00:10:24.894  13166.778 - 13226.356:   80.5951%  (       33)
00:10:24.894  13226.356 - 13285.935:   80.9506%  (       38)
00:10:24.894  13285.935 - 13345.513:   81.2874%  (       36)
00:10:24.894  13345.513 - 13405.091:   81.5588%  (       29)
00:10:24.894  13405.091 - 13464.669:   81.8207%  (       28)
00:10:24.894  13464.669 - 13524.247:   82.0734%  (       27)
00:10:24.894  13524.247 - 13583.825:   82.3073%  (       25)
00:10:24.894  13583.825 - 13643.404:   82.5037%  (       21)
00:10:24.894  13643.404 - 13702.982:   82.6722%  (       18)
00:10:24.894  13702.982 - 13762.560:   82.8499%  (       19)
00:10:24.894  13762.560 - 13822.138:   83.0464%  (       21)
00:10:24.894  13822.138 - 13881.716:   83.1868%  (       15)
00:10:24.894  13881.716 - 13941.295:   83.3645%  (       19)
00:10:24.894  13941.295 - 14000.873:   83.5049%  (       15)
00:10:24.894  14000.873 - 14060.451:   83.6733%  (       18)
00:10:24.894  14060.451 - 14120.029:   83.8323%  (       17)
00:10:24.894  14120.029 - 14179.607:   83.9820%  (       16)
00:10:24.894  14179.607 - 14239.185:   84.1785%  (       21)
00:10:24.894  14239.185 - 14298.764:   84.3282%  (       16)
00:10:24.894  14298.764 - 14358.342:   84.4966%  (       18)
00:10:24.894  14358.342 - 14417.920:   84.6370%  (       15)
00:10:24.894  14417.920 - 14477.498:   84.7867%  (       16)
00:10:24.894  14477.498 - 14537.076:   84.9177%  (       14)
00:10:24.894  14537.076 - 14596.655:   85.0674%  (       16)
00:10:24.894  14596.655 - 14656.233:   85.2264%  (       17)
00:10:24.894  14656.233 - 14715.811:   85.4042%  (       19)
00:10:24.894  14715.811 - 14775.389:   85.5726%  (       18)
00:10:24.894  14775.389 - 14834.967:   85.7410%  (       18)
00:10:24.894  14834.967 - 14894.545:   85.9375%  (       21)
00:10:24.894  14894.545 - 14954.124:   86.0778%  (       15)
00:10:24.894  14954.124 - 15013.702:   86.2182%  (       15)
00:10:24.894  15013.702 - 15073.280:   86.3398%  (       13)
00:10:24.894  15073.280 - 15132.858:   86.4802%  (       15)
00:10:24.894  15132.858 - 15192.436:   86.5831%  (       11)
00:10:24.894  15192.436 - 15252.015:   86.6673%  (        9)
00:10:24.894  15252.015 - 15371.171:   86.8170%  (       16)
00:10:24.894  15371.171 - 15490.327:   86.9199%  (       11)
00:10:24.894  15490.327 - 15609.484:   87.0322%  (       12)
00:10:24.894  15609.484 - 15728.640:   87.1632%  (       14)
00:10:24.894  15728.640 - 15847.796:   87.2661%  (       11)
00:10:24.894  15847.796 - 15966.953:   87.3316%  (        7)
00:10:24.894  15966.953 - 16086.109:   87.3784%  (        5)
00:10:24.894  16086.109 - 16205.265:   87.4626%  (        9)
00:10:24.894  16205.265 - 16324.422:   87.5561%  (       10)
00:10:24.894  16324.422 - 16443.578:   87.6497%  (       10)
00:10:24.894  16443.578 - 16562.735:   87.7433%  (       10)
00:10:24.894  16562.735 - 16681.891:   87.8181%  (        8)
00:10:24.894  16681.891 - 16801.047:   87.9023%  (        9)
00:10:24.894  16801.047 - 16920.204:   87.9959%  (       10)
00:10:24.894  16920.204 - 17039.360:   88.0988%  (       11)
00:10:24.894  17039.360 - 17158.516:   88.2017%  (       11)
00:10:24.894  17158.516 - 17277.673:   88.3046%  (       11)
00:10:24.894  17277.673 - 17396.829:   88.4169%  (       12)
00:10:24.894  17396.829 - 17515.985:   88.5292%  (       12)
00:10:24.894  17515.985 - 17635.142:   88.6508%  (       13)
00:10:24.894  17635.142 - 17754.298:   88.7725%  (       13)
00:10:24.894  17754.298 - 17873.455:   88.8847%  (       12)
00:10:24.894  17873.455 - 17992.611:   88.9783%  (       10)
00:10:24.894  17992.611 - 18111.767:   89.0719%  (       10)
00:10:24.894  18111.767 - 18230.924:   89.1561%  (        9)
00:10:24.894  18230.924 - 18350.080:   89.2496%  (       10)
00:10:24.894  18350.080 - 18469.236:   89.3338%  (        9)
00:10:24.894  18469.236 - 18588.393:   89.4180%  (        9)
00:10:24.894  18588.393 - 18707.549:   89.4742%  (        6)
00:10:24.894  18707.549 - 18826.705:   89.5210%  (        5)
00:10:24.894  18826.705 - 18945.862:   89.5397%  (        2)
00:10:24.894  18945.862 - 19065.018:   89.5865%  (        5)
00:10:24.894  19065.018 - 19184.175:   89.6707%  (        9)
00:10:24.894  19184.175 - 19303.331:   89.7642%  (       10)
00:10:24.894  19303.331 - 19422.487:   89.8578%  (       10)
00:10:24.894  19422.487 - 19541.644:   89.9513%  (       10)
00:10:24.894  19541.644 - 19660.800:   90.0917%  (       15)
00:10:24.894  19660.800 - 19779.956:   90.2133%  (       13)
00:10:24.894  19779.956 - 19899.113:   90.3537%  (       15)
00:10:24.894  19899.113 - 20018.269:   90.5221%  (       18)
00:10:24.894  20018.269 - 20137.425:   90.7373%  (       23)
00:10:24.894  20137.425 - 20256.582:   90.9993%  (       28)
00:10:24.894  20256.582 - 20375.738:   91.2238%  (       24)
00:10:24.894  20375.738 - 20494.895:   91.4764%  (       27)
00:10:24.894  20494.895 - 20614.051:   91.7571%  (       30)
00:10:24.894  20614.051 - 20733.207:   92.0378%  (       30)
00:10:24.894  20733.207 - 20852.364:   92.3653%  (       35)
00:10:24.894  20852.364 - 20971.520:   92.7115%  (       37)
00:10:24.894  20971.520 - 21090.676:   93.0483%  (       36)
00:10:24.894  21090.676 - 21209.833:   93.4132%  (       39)
00:10:24.894  21209.833 - 21328.989:   93.7594%  (       37)
00:10:24.894  21328.989 - 21448.145:   94.0775%  (       34)
00:10:24.894  21448.145 - 21567.302:   94.3769%  (       32)
00:10:24.894  21567.302 - 21686.458:   94.6856%  (       33)
00:10:24.894  21686.458 - 21805.615:   94.9850%  (       32)
00:10:24.894  21805.615 - 21924.771:   95.2564%  (       29)
00:10:24.894  21924.771 - 22043.927:   95.5745%  (       34)
00:10:24.894  22043.927 - 22163.084:   95.9019%  (       35)
00:10:24.894  22163.084 - 22282.240:   96.2107%  (       33)
00:10:24.894  22282.240 - 22401.396:   96.5007%  (       31)
00:10:24.894  22401.396 - 22520.553:   96.7908%  (       31)
00:10:24.894  22520.553 - 22639.709:   97.0902%  (       32)
00:10:24.894  22639.709 - 22758.865:   97.3896%  (       32)
00:10:24.894  22758.865 - 22878.022:   97.6796%  (       31)
00:10:24.894  22878.022 - 22997.178:   97.9416%  (       28)
00:10:24.894  22997.178 - 23116.335:   98.2317%  (       31)
00:10:24.894  23116.335 - 23235.491:   98.4562%  (       24)
00:10:24.894  23235.491 - 23354.647:   98.7088%  (       27)
00:10:24.894  23354.647 - 23473.804:   98.9427%  (       25)
00:10:24.894  23473.804 - 23592.960:   99.0644%  (       13)
00:10:24.894  23592.960 - 23712.116:   99.1766%  (       12)
00:10:24.894  23712.116 - 23831.273:   99.3076%  (       14)
00:10:24.894  23831.273 - 23950.429:   99.3731%  (        7)
00:10:24.894  23950.429 - 24069.585:   99.3918%  (        2)
00:10:24.894  24069.585 - 24188.742:   99.4012%  (        1)
00:10:24.894  28359.215 - 28478.371:   99.4106%  (        1)
00:10:24.894  28478.371 - 28597.527:   99.4293%  (        2)
00:10:24.894  28597.527 - 28716.684:   99.4480%  (        2)
00:10:24.894  28716.684 - 28835.840:   99.4667%  (        2)
00:10:24.894  28835.840 - 28954.996:   99.4948%  (        3)
00:10:24.894  28954.996 - 29074.153:   99.5135%  (        2)
00:10:24.894  29074.153 - 29193.309:   99.5322%  (        2)
00:10:24.894  29193.309 - 29312.465:   99.5509%  (        2)
00:10:24.894  29312.465 - 29431.622:   99.5696%  (        2)
00:10:24.894  29431.622 - 29550.778:   99.5883%  (        2)
00:10:24.894  29550.778 - 29669.935:   99.6164%  (        3)
00:10:24.894  29669.935 - 29789.091:   99.6351%  (        2)
00:10:24.894  29789.091 - 29908.247:   99.6538%  (        2)
00:10:24.894  29908.247 - 30027.404:   99.6725%  (        2)
00:10:24.894  30027.404 - 30146.560:   99.6912%  (        2)
00:10:24.894  30146.560 - 30265.716:   99.7100%  (        2)
00:10:24.894  30265.716 - 30384.873:   99.7287%  (        2)
00:10:24.894  30384.873 - 30504.029:   99.7567%  (        3)
00:10:24.894  30504.029 - 30742.342:   99.7942%  (        4)
00:10:24.894  30742.342 - 30980.655:   99.8316%  (        4)
00:10:24.894  30980.655 - 31218.967:   99.8784%  (        5)
00:10:24.894  31218.967 - 31457.280:   99.9158%  (        4)
00:10:24.894  31457.280 - 31695.593:   99.9532%  (        4)
00:10:24.894  31695.593 - 31933.905:  100.0000%  (        5)
00:10:24.894  
00:10:24.894  Latency histogram for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:10:24.894  ==============================================================================
00:10:24.894         Range in us     Cumulative    IO count
00:10:24.894   7268.538 -  7298.327:    0.0187%  (        2)
00:10:24.894   7298.327 -  7328.116:    0.0468%  (        3)
00:10:24.894   7328.116 -  7357.905:    0.0842%  (        4)
00:10:24.894   7357.905 -  7387.695:    0.1310%  (        5)
00:10:24.894   7387.695 -  7417.484:    0.1591%  (        3)
00:10:24.894   7417.484 -  7447.273:    0.2058%  (        5)
00:10:24.894   7447.273 -  7477.062:    0.2620%  (        6)
00:10:24.894   7477.062 -  7506.851:    0.3275%  (        7)
00:10:24.894   7506.851 -  7536.640:    0.4117%  (        9)
00:10:24.894   7536.640 -  7566.429:    0.4959%  (        9)
00:10:24.894   7566.429 -  7596.218:    0.6082%  (       12)
00:10:24.894   7596.218 -  7626.007:    0.7579%  (       16)
00:10:24.894   7626.007 -  7685.585:    1.0666%  (       33)
00:10:24.895   7685.585 -  7745.164:    1.3847%  (       34)
00:10:24.895   7745.164 -  7804.742:    1.6748%  (       31)
00:10:24.895   7804.742 -  7864.320:    2.0022%  (       35)
00:10:24.895   7864.320 -  7923.898:    2.3578%  (       38)
00:10:24.895   7923.898 -  7983.476:    2.6665%  (       33)
00:10:24.895   7983.476 -  8043.055:    2.9940%  (       35)
00:10:24.895   8043.055 -  8102.633:    3.4150%  (       45)
00:10:24.895   8102.633 -  8162.211:    3.8361%  (       45)
00:10:24.895   8162.211 -  8221.789:    4.2571%  (       45)
00:10:24.895   8221.789 -  8281.367:    4.7904%  (       57)
00:10:24.895   8281.367 -  8340.945:    5.3892%  (       64)
00:10:24.895   8340.945 -  8400.524:    6.1471%  (       81)
00:10:24.895   8400.524 -  8460.102:    7.1014%  (      102)
00:10:24.895   8460.102 -  8519.680:    7.9809%  (       94)
00:10:24.895   8519.680 -  8579.258:    8.9820%  (      107)
00:10:24.895   8579.258 -  8638.836:    9.9364%  (      102)
00:10:24.895   8638.836 -  8698.415:   10.7878%  (       91)
00:10:24.895   8698.415 -  8757.993:   11.7047%  (       98)
00:10:24.895   8757.993 -  8817.571:   12.7152%  (      108)
00:10:24.895   8817.571 -  8877.149:   13.8754%  (      124)
00:10:24.895   8877.149 -  8936.727:   15.0356%  (      124)
00:10:24.895   8936.727 -  8996.305:   16.2893%  (      134)
00:10:24.895   8996.305 -  9055.884:   17.6272%  (      143)
00:10:24.895   9055.884 -  9115.462:   19.0026%  (      147)
00:10:24.895   9115.462 -  9175.040:   20.4528%  (      155)
00:10:24.895   9175.040 -  9234.618:   22.0153%  (      167)
00:10:24.895   9234.618 -  9294.196:   23.6433%  (      174)
00:10:24.895   9294.196 -  9353.775:   25.3930%  (      187)
00:10:24.895   9353.775 -  9413.353:   27.2081%  (      194)
00:10:24.895   9413.353 -  9472.931:   29.0138%  (      193)
00:10:24.895   9472.931 -  9532.509:   30.6886%  (      179)
00:10:24.895   9532.509 -  9592.087:   32.3728%  (      180)
00:10:24.895   9592.087 -  9651.665:   33.9914%  (      173)
00:10:24.895   9651.665 -  9711.244:   35.6475%  (      177)
00:10:24.895   9711.244 -  9770.822:   37.2193%  (      168)
00:10:24.895   9770.822 -  9830.400:   38.8005%  (      169)
00:10:24.895   9830.400 -  9889.978:   40.4004%  (      171)
00:10:24.895   9889.978 -  9949.556:   42.0284%  (      174)
00:10:24.895   9949.556 - 10009.135:   43.5629%  (      164)
00:10:24.895  10009.135 - 10068.713:   45.1815%  (      173)
00:10:24.895  10068.713 - 10128.291:   46.6972%  (      162)
00:10:24.895  10128.291 - 10187.869:   48.2036%  (      161)
00:10:24.895  10187.869 - 10247.447:   49.6538%  (      155)
00:10:24.895  10247.447 - 10307.025:   51.2070%  (      166)
00:10:24.895  10307.025 - 10366.604:   52.6759%  (      157)
00:10:24.895  10366.604 - 10426.182:   54.1074%  (      153)
00:10:24.895  10426.182 - 10485.760:   55.4173%  (      140)
00:10:24.895  10485.760 - 10545.338:   56.5681%  (      123)
00:10:24.895  10545.338 - 10604.916:   57.7376%  (      125)
00:10:24.895  10604.916 - 10664.495:   58.8604%  (      120)
00:10:24.895  10664.495 - 10724.073:   59.8615%  (      107)
00:10:24.895  10724.073 - 10783.651:   60.8065%  (      101)
00:10:24.895  10783.651 - 10843.229:   61.7702%  (      103)
00:10:24.895  10843.229 - 10902.807:   62.6871%  (       98)
00:10:24.895  10902.807 - 10962.385:   63.5853%  (       96)
00:10:24.895  10962.385 - 11021.964:   64.4274%  (       90)
00:10:24.895  11021.964 - 11081.542:   65.1385%  (       76)
00:10:24.895  11081.542 - 11141.120:   65.8776%  (       79)
00:10:24.895  11141.120 - 11200.698:   66.6355%  (       81)
00:10:24.895  11200.698 - 11260.276:   67.2343%  (       64)
00:10:24.895  11260.276 - 11319.855:   67.7489%  (       55)
00:10:24.895  11319.855 - 11379.433:   68.2541%  (       54)
00:10:24.895  11379.433 - 11439.011:   68.6939%  (       47)
00:10:24.895  11439.011 - 11498.589:   69.1243%  (       46)
00:10:24.895  11498.589 - 11558.167:   69.5079%  (       41)
00:10:24.895  11558.167 - 11617.745:   69.8915%  (       41)
00:10:24.895  11617.745 - 11677.324:   70.3312%  (       47)
00:10:24.895  11677.324 - 11736.902:   70.7522%  (       45)
00:10:24.895  11736.902 - 11796.480:   71.1359%  (       41)
00:10:24.895  11796.480 - 11856.058:   71.5288%  (       42)
00:10:24.895  11856.058 - 11915.636:   71.9031%  (       40)
00:10:24.895  11915.636 - 11975.215:   72.2305%  (       35)
00:10:24.895  11975.215 - 12034.793:   72.5861%  (       38)
00:10:24.895  12034.793 - 12094.371:   72.9697%  (       41)
00:10:24.895  12094.371 - 12153.949:   73.3720%  (       43)
00:10:24.895  12153.949 - 12213.527:   73.7369%  (       39)
00:10:24.895  12213.527 - 12273.105:   74.0831%  (       37)
00:10:24.895  12273.105 - 12332.684:   74.4948%  (       44)
00:10:24.895  12332.684 - 12392.262:   74.9158%  (       45)
00:10:24.895  12392.262 - 12451.840:   75.3181%  (       43)
00:10:24.895  12451.840 - 12511.418:   75.7485%  (       46)
00:10:24.895  12511.418 - 12570.996:   76.1976%  (       48)
00:10:24.895  12570.996 - 12630.575:   76.6935%  (       53)
00:10:24.895  12630.575 - 12690.153:   77.1707%  (       51)
00:10:24.895  12690.153 - 12749.731:   77.5636%  (       42)
00:10:24.895  12749.731 - 12809.309:   77.9472%  (       41)
00:10:24.895  12809.309 - 12868.887:   78.3963%  (       48)
00:10:24.895  12868.887 - 12928.465:   78.7706%  (       40)
00:10:24.895  12928.465 - 12988.044:   79.1355%  (       39)
00:10:24.895  12988.044 - 13047.622:   79.4817%  (       37)
00:10:24.895  13047.622 - 13107.200:   79.8933%  (       44)
00:10:24.895  13107.200 - 13166.778:   80.2395%  (       37)
00:10:24.895  13166.778 - 13226.356:   80.6044%  (       39)
00:10:24.895  13226.356 - 13285.935:   80.9506%  (       37)
00:10:24.895  13285.935 - 13345.513:   81.2874%  (       36)
00:10:24.895  13345.513 - 13405.091:   81.6243%  (       36)
00:10:24.895  13405.091 - 13464.669:   81.9049%  (       30)
00:10:24.895  13464.669 - 13524.247:   82.1950%  (       31)
00:10:24.895  13524.247 - 13583.825:   82.4570%  (       28)
00:10:24.895  13583.825 - 13643.404:   82.6722%  (       23)
00:10:24.895  13643.404 - 13702.982:   82.8593%  (       20)
00:10:24.895  13702.982 - 13762.560:   83.0464%  (       20)
00:10:24.895  13762.560 - 13822.138:   83.2242%  (       19)
00:10:24.895  13822.138 - 13881.716:   83.3645%  (       15)
00:10:24.895  13881.716 - 13941.295:   83.5142%  (       16)
00:10:24.895  13941.295 - 14000.873:   83.6733%  (       17)
00:10:24.895  14000.873 - 14060.451:   83.8510%  (       19)
00:10:24.895  14060.451 - 14120.029:   84.0382%  (       20)
00:10:24.895  14120.029 - 14179.607:   84.2066%  (       18)
00:10:24.895  14179.607 - 14239.185:   84.3656%  (       17)
00:10:24.895  14239.185 - 14298.764:   84.5060%  (       15)
00:10:24.895  14298.764 - 14358.342:   84.6650%  (       17)
00:10:24.895  14358.342 - 14417.920:   84.8147%  (       16)
00:10:24.895  14417.920 - 14477.498:   84.9644%  (       16)
00:10:24.895  14477.498 - 14537.076:   85.1329%  (       18)
00:10:24.895  14537.076 - 14596.655:   85.3200%  (       20)
00:10:24.895  14596.655 - 14656.233:   85.4697%  (       16)
00:10:24.895  14656.233 - 14715.811:   85.6007%  (       14)
00:10:24.895  14715.811 - 14775.389:   85.7129%  (       12)
00:10:24.895  14775.389 - 14834.967:   85.8533%  (       15)
00:10:24.895  14834.967 - 14894.545:   85.9656%  (       12)
00:10:24.895  14894.545 - 14954.124:   86.1059%  (       15)
00:10:24.895  14954.124 - 15013.702:   86.2463%  (       15)
00:10:24.895  15013.702 - 15073.280:   86.3772%  (       14)
00:10:24.895  15073.280 - 15132.858:   86.5269%  (       16)
00:10:24.895  15132.858 - 15192.436:   86.6205%  (       10)
00:10:24.895  15192.436 - 15252.015:   86.7141%  (       10)
00:10:24.895  15252.015 - 15371.171:   86.8544%  (       15)
00:10:24.895  15371.171 - 15490.327:   86.9760%  (       13)
00:10:24.895  15490.327 - 15609.484:   87.0696%  (       10)
00:10:24.895  15609.484 - 15728.640:   87.2006%  (       14)
00:10:24.895  15728.640 - 15847.796:   87.3316%  (       14)
00:10:24.895  15847.796 - 15966.953:   87.4719%  (       15)
00:10:24.895  15966.953 - 16086.109:   87.6123%  (       15)
00:10:24.895  16086.109 - 16205.265:   87.7526%  (       15)
00:10:24.895  16205.265 - 16324.422:   87.8836%  (       14)
00:10:24.895  16324.422 - 16443.578:   87.9585%  (        8)
00:10:24.895  16443.578 - 16562.735:   88.0146%  (        6)
00:10:24.895  16562.735 - 16681.891:   88.0988%  (        9)
00:10:24.895  16681.891 - 16801.047:   88.1830%  (        9)
00:10:24.895  16801.047 - 16920.204:   88.2672%  (        9)
00:10:24.895  16920.204 - 17039.360:   88.3421%  (        8)
00:10:24.895  17039.360 - 17158.516:   88.4356%  (       10)
00:10:24.895  17158.516 - 17277.673:   88.5385%  (       11)
00:10:24.895  17277.673 - 17396.829:   88.6508%  (       12)
00:10:24.895  17396.829 - 17515.985:   88.7725%  (       13)
00:10:24.895  17515.985 - 17635.142:   88.8847%  (       12)
00:10:24.895  17635.142 - 17754.298:   89.0064%  (       13)
00:10:24.895  17754.298 - 17873.455:   89.0999%  (       10)
00:10:24.895  17873.455 - 17992.611:   89.1935%  (       10)
00:10:24.895  17992.611 - 18111.767:   89.3151%  (       13)
00:10:24.895  18111.767 - 18230.924:   89.4555%  (       15)
00:10:24.895  18230.924 - 18350.080:   89.5865%  (       14)
00:10:24.895  18350.080 - 18469.236:   89.7174%  (       14)
00:10:24.895  18469.236 - 18588.393:   89.8297%  (       12)
00:10:24.895  18588.393 - 18707.549:   89.9420%  (       12)
00:10:24.895  18707.549 - 18826.705:   90.0262%  (        9)
00:10:24.895  18826.705 - 18945.862:   90.1104%  (        9)
00:10:24.895  18945.862 - 19065.018:   90.1946%  (        9)
00:10:24.895  19065.018 - 19184.175:   90.2975%  (       11)
00:10:24.895  19184.175 - 19303.331:   90.3724%  (        8)
00:10:24.895  19303.331 - 19422.487:   90.4659%  (       10)
00:10:24.895  19422.487 - 19541.644:   90.5595%  (       10)
00:10:24.895  19541.644 - 19660.800:   90.6344%  (        8)
00:10:24.895  19660.800 - 19779.956:   90.7279%  (       10)
00:10:24.895  19779.956 - 19899.113:   90.8121%  (        9)
00:10:24.895  19899.113 - 20018.269:   90.9805%  (       18)
00:10:24.895  20018.269 - 20137.425:   91.1770%  (       21)
00:10:24.895  20137.425 - 20256.582:   91.4016%  (       24)
00:10:24.895  20256.582 - 20375.738:   91.6355%  (       25)
00:10:24.895  20375.738 - 20494.895:   91.9068%  (       29)
00:10:24.895  20494.895 - 20614.051:   92.2249%  (       34)
00:10:24.895  20614.051 - 20733.207:   92.5430%  (       34)
00:10:24.895  20733.207 - 20852.364:   92.8424%  (       32)
00:10:24.895  20852.364 - 20971.520:   93.1418%  (       32)
00:10:24.895  20971.520 - 21090.676:   93.4412%  (       32)
00:10:24.895  21090.676 - 21209.833:   93.7406%  (       32)
00:10:24.895  21209.833 - 21328.989:   94.0213%  (       30)
00:10:24.895  21328.989 - 21448.145:   94.3207%  (       32)
00:10:24.895  21448.145 - 21567.302:   94.6201%  (       32)
00:10:24.895  21567.302 - 21686.458:   94.9289%  (       33)
00:10:24.895  21686.458 - 21805.615:   95.2096%  (       30)
00:10:24.895  21805.615 - 21924.771:   95.4996%  (       31)
00:10:24.895  21924.771 - 22043.927:   95.7990%  (       32)
00:10:24.895  22043.927 - 22163.084:   96.0891%  (       31)
00:10:24.895  22163.084 - 22282.240:   96.3698%  (       30)
00:10:24.895  22282.240 - 22401.396:   96.6504%  (       30)
00:10:24.896  22401.396 - 22520.553:   96.9124%  (       28)
00:10:24.896  22520.553 - 22639.709:   97.1557%  (       26)
00:10:24.896  22639.709 - 22758.865:   97.4083%  (       27)
00:10:24.896  22758.865 - 22878.022:   97.6329%  (       24)
00:10:24.896  22878.022 - 22997.178:   97.8293%  (       21)
00:10:24.896  22997.178 - 23116.335:   98.0539%  (       24)
00:10:24.896  23116.335 - 23235.491:   98.2597%  (       22)
00:10:24.896  23235.491 - 23354.647:   98.4656%  (       22)
00:10:24.896  23354.647 - 23473.804:   98.6433%  (       19)
00:10:24.896  23473.804 - 23592.960:   98.7743%  (       14)
00:10:24.896  23592.960 - 23712.116:   98.8772%  (       11)
00:10:24.896  23712.116 - 23831.273:   98.9615%  (        9)
00:10:24.896  23831.273 - 23950.429:   99.0363%  (        8)
00:10:24.896  23950.429 - 24069.585:   99.0924%  (        6)
00:10:24.896  24069.585 - 24188.742:   99.1205%  (        3)
00:10:24.896  24188.742 - 24307.898:   99.1392%  (        2)
00:10:24.896  24307.898 - 24427.055:   99.1579%  (        2)
00:10:24.896  24427.055 - 24546.211:   99.1766%  (        2)
00:10:24.896  24546.211 - 24665.367:   99.1954%  (        2)
00:10:24.896  24665.367 - 24784.524:   99.2421%  (        5)
00:10:24.896  24784.524 - 24903.680:   99.3263%  (        9)
00:10:24.896  24903.680 - 25022.836:   99.4106%  (        9)
00:10:24.896  25022.836 - 25141.993:   99.5041%  (       10)
00:10:24.896  25141.993 - 25261.149:   99.5790%  (        8)
00:10:24.896  25261.149 - 25380.305:   99.6725%  (       10)
00:10:24.896  25380.305 - 25499.462:   99.7754%  (       11)
00:10:24.896  25499.462 - 25618.618:   99.8129%  (        4)
00:10:24.896  25618.618 - 25737.775:   99.8316%  (        2)
00:10:24.896  25737.775 - 25856.931:   99.8503%  (        2)
00:10:24.896  25856.931 - 25976.087:   99.8690%  (        2)
00:10:24.896  25976.087 - 26095.244:   99.8877%  (        2)
00:10:24.896  26095.244 - 26214.400:   99.9064%  (        2)
00:10:24.896  26214.400 - 26333.556:   99.9345%  (        3)
00:10:24.896  26333.556 - 26452.713:   99.9532%  (        2)
00:10:24.896  26452.713 - 26571.869:   99.9719%  (        2)
00:10:24.896  26571.869 - 26691.025:   99.9906%  (        2)
00:10:24.896  26691.025 - 26810.182:  100.0000%  (        1)
00:10:24.896  
00:10:24.896   14:21:03 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
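(For context, spdk_nvme_perf is SPDK's NVMe performance example tool. The gloss below is a best-effort reading of the flags recorded on the line above, not authoritative documentation:

    # Same invocation as logged above; the annotations are assumptions
    # about spdk_nvme_perf's options, not taken from the log itself.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 128 -w write -o 12288 -t 1 -LL -i 0
    #   -q 128    queue depth: up to 128 outstanding IOs per namespace
    #   -w write  workload pattern: 100% writes
    #   -o 12288  IO size in bytes (12 KiB per IO)
    #   -t 1      run time in seconds
    #   -LL       -L enables latency tracking; given twice, it also prints
    #             the detailed per-bucket latency histograms seen here
    #   -i 0      shared-memory ID, letting SPDK processes coexist
)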
00:10:25.837  Initializing NVMe Controllers
00:10:25.837  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:25.837  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:25.837  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:25.837  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:25.837  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:10:25.837  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:10:25.837  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:10:25.837  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:10:25.837  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:10:25.837  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:10:25.837  Initialization complete. Launching workers.
00:10:25.837  ========================================================
00:10:25.837                                                                             Latency(us)
00:10:25.837  Device Information                     :       IOPS      MiB/s    Average        min        max
00:10:25.837  PCIE (0000:00:10.0) NSID 1 from core  0:    8605.07     100.84   14910.15    9771.83   42456.97
00:10:25.837  PCIE (0000:00:11.0) NSID 1 from core  0:    8605.07     100.84   14884.87    9867.76   41100.43
00:10:25.837  PCIE (0000:00:13.0) NSID 1 from core  0:    8605.07     100.84   14859.42    9925.22   39881.33
00:10:25.837  PCIE (0000:00:12.0) NSID 1 from core  0:    8605.07     100.84   14834.10    9934.51   38590.57
00:10:25.837  PCIE (0000:00:12.0) NSID 2 from core  0:    8605.07     100.84   14808.06    9764.16   37379.24
00:10:25.837  PCIE (0000:00:12.0) NSID 3 from core  0:    8605.07     100.84   14772.48    9936.62   36104.71
00:10:25.837  ========================================================
00:10:25.837  Total                                  :   51630.43     605.04   14844.85    9764.16   42456.97
00:10:25.837  
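(Consistency check: the MiB/s column above follows directly from IOPS × IO size. With the 12288-byte writes requested by -o 12288:

    8605.07 IOPS × 12288 B = 105,739,100 B/s ≈ 100.84 MiB/s   per namespace
    51630.43 IOPS × 12288 B ≈ 605.04 MiB/s                    total across all six namespaces)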
00:10:25.837  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:10:25.837  =================================================================================
00:10:25.837    1.00000% : 10068.713us
00:10:25.837   10.00000% : 11021.964us
00:10:25.837   25.00000% : 12034.793us
00:10:25.837   50.00000% : 13702.982us
00:10:25.837   75.00000% : 17277.673us
00:10:25.837   90.00000% : 19899.113us
00:10:25.837   95.00000% : 21328.989us
00:10:25.838   98.00000% : 23831.273us
00:10:25.838   99.00000% : 29789.091us
00:10:25.838   99.50000% : 40274.851us
00:10:25.838   99.90000% : 42181.353us
00:10:25.838   99.99000% : 42657.978us
00:10:25.838   99.99900% : 42657.978us
00:10:25.838   99.99990% : 42657.978us
00:10:25.838   99.99999% : 42657.978us
00:10:25.838  
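(Note: these percentiles appear to be reported as histogram bucket upper bounds rather than interpolated values, which is why every tail percentile from 99.99% upward repeats 42657.978us: with only ~8,605 IOs completed in the one-second run, everything past 99.99% falls inside the final bucket. Likewise the 50.00000% entry of 13702.982us matches the bucket in this device's histogram further below where cumulative coverage first crosses 50%, i.e. 13643.404 - 13702.982: 50.1389%.)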
00:10:25.838  Summary latency data for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:10:25.838  =================================================================================
00:10:25.838    1.00000% : 10128.291us
00:10:25.838   10.00000% : 11021.964us
00:10:25.838   25.00000% : 12034.793us
00:10:25.838   50.00000% : 13822.138us
00:10:25.838   75.00000% : 17396.829us
00:10:25.838   90.00000% : 19779.956us
00:10:25.838   95.00000% : 21328.989us
00:10:25.838   98.00000% : 23235.491us
00:10:25.838   99.00000% : 29074.153us
00:10:25.838   99.50000% : 39321.600us
00:10:25.838   99.90000% : 40989.789us
00:10:25.838   99.99000% : 41228.102us
00:10:25.838   99.99900% : 41228.102us
00:10:25.838   99.99990% : 41228.102us
00:10:25.838   99.99999% : 41228.102us
00:10:25.838  
00:10:25.838  Summary latency data for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:10:25.838  =================================================================================
00:10:25.838    1.00000% : 10187.869us
00:10:25.838   10.00000% : 11021.964us
00:10:25.838   25.00000% : 12034.793us
00:10:25.838   50.00000% : 13702.982us
00:10:25.838   75.00000% : 17515.985us
00:10:25.838   90.00000% : 19660.800us
00:10:25.838   95.00000% : 21209.833us
00:10:25.838   98.00000% : 23592.960us
00:10:25.838   99.00000% : 27882.589us
00:10:25.838   99.50000% : 38130.036us
00:10:25.838   99.90000% : 39559.913us
00:10:25.838   99.99000% : 40036.538us
00:10:25.838   99.99900% : 40036.538us
00:10:25.838   99.99990% : 40036.538us
00:10:25.838   99.99999% : 40036.538us
00:10:25.838  
00:10:25.838  Summary latency data for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:10:25.838  =================================================================================
00:10:25.838    1.00000% : 10247.447us
00:10:25.838   10.00000% : 11021.964us
00:10:25.838   25.00000% : 11975.215us
00:10:25.838   50.00000% : 13822.138us
00:10:25.838   75.00000% : 17396.829us
00:10:25.838   90.00000% : 19779.956us
00:10:25.838   95.00000% : 21090.676us
00:10:25.838   98.00000% : 23473.804us
00:10:25.838   99.00000% : 26691.025us
00:10:25.838   99.50000% : 36700.160us
00:10:25.838   99.90000% : 38368.349us
00:10:25.838   99.99000% : 38606.662us
00:10:25.838   99.99900% : 38606.662us
00:10:25.838   99.99990% : 38606.662us
00:10:25.838   99.99999% : 38606.662us
00:10:25.838  
00:10:25.838  Summary latency data for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:10:25.838  =================================================================================
00:10:25.838    1.00000% : 10187.869us
00:10:25.838   10.00000% : 10962.385us
00:10:25.838   25.00000% : 11975.215us
00:10:25.838   50.00000% : 13822.138us
00:10:25.838   75.00000% : 17277.673us
00:10:25.838   90.00000% : 19779.956us
00:10:25.838   95.00000% : 21448.145us
00:10:25.838   98.00000% : 23354.647us
00:10:25.838   99.00000% : 25380.305us
00:10:25.838   99.50000% : 35508.596us
00:10:25.838   99.90000% : 37176.785us
00:10:25.838   99.99000% : 37415.098us
00:10:25.838   99.99900% : 37415.098us
00:10:25.838   99.99990% : 37415.098us
00:10:25.838   99.99999% : 37415.098us
00:10:25.838  
00:10:25.838  Summary latency data for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:10:25.838  =================================================================================
00:10:25.838    1.00000% : 10247.447us
00:10:25.838   10.00000% : 11021.964us
00:10:25.838   25.00000% : 12034.793us
00:10:25.838   50.00000% : 13702.982us
00:10:25.838   75.00000% : 17277.673us
00:10:25.838   90.00000% : 19660.800us
00:10:25.838   95.00000% : 21209.833us
00:10:25.838   98.00000% : 22878.022us
00:10:25.838   99.00000% : 24069.585us
00:10:25.838   99.50000% : 33602.095us
00:10:25.838   99.90000% : 35746.909us
00:10:25.838   99.99000% : 36223.535us
00:10:25.838   99.99900% : 36223.535us
00:10:25.838   99.99990% : 36223.535us
00:10:25.838   99.99999% : 36223.535us
00:10:25.838  
00:10:25.838  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:10:25.838  ==============================================================================
00:10:25.838         Range in us     Cumulative    IO count
00:10:25.838   9770.822 -  9830.400:    0.1273%  (       11)
00:10:25.838   9830.400 -  9889.978:    0.3588%  (       20)
00:10:25.838   9889.978 -  9949.556:    0.6134%  (       22)
00:10:25.838   9949.556 - 10009.135:    0.8796%  (       23)
00:10:25.838  10009.135 - 10068.713:    1.2500%  (       32)
00:10:25.838  10068.713 - 10128.291:    1.6088%  (       31)
00:10:25.838  10128.291 - 10187.869:    1.9907%  (       33)
00:10:25.838  10187.869 - 10247.447:    2.4306%  (       38)
00:10:25.838  10247.447 - 10307.025:    2.8935%  (       40)
00:10:25.838  10307.025 - 10366.604:    3.5648%  (       58)
00:10:25.838  10366.604 - 10426.182:    4.0509%  (       42)
00:10:25.838  10426.182 - 10485.760:    4.7338%  (       59)
00:10:25.838  10485.760 - 10545.338:    5.2315%  (       43)
00:10:25.838  10545.338 - 10604.916:    5.7986%  (       49)
00:10:25.838  10604.916 - 10664.495:    6.4236%  (       54)
00:10:25.838  10664.495 - 10724.073:    6.9444%  (       45)
00:10:25.838  10724.073 - 10783.651:    7.6157%  (       58)
00:10:25.838  10783.651 - 10843.229:    8.2176%  (       52)
00:10:25.838  10843.229 - 10902.807:    9.0509%  (       72)
00:10:25.838  10902.807 - 10962.385:    9.9421%  (       77)
00:10:25.838  10962.385 - 11021.964:   10.9259%  (       85)
00:10:25.838  11021.964 - 11081.542:   11.7824%  (       74)
00:10:25.838  11081.542 - 11141.120:   12.7894%  (       87)
00:10:25.838  11141.120 - 11200.698:   13.5764%  (       68)
00:10:25.838  11200.698 - 11260.276:   14.5718%  (       86)
00:10:25.838  11260.276 - 11319.855:   15.4051%  (       72)
00:10:25.838  11319.855 - 11379.433:   16.2500%  (       73)
00:10:25.838  11379.433 - 11439.011:   17.0139%  (       66)
00:10:25.838  11439.011 - 11498.589:   17.7662%  (       65)
00:10:25.838  11498.589 - 11558.167:   18.4722%  (       61)
00:10:25.838  11558.167 - 11617.745:   19.2361%  (       66)
00:10:25.838  11617.745 - 11677.324:   19.9537%  (       62)
00:10:25.838  11677.324 - 11736.902:   20.7755%  (       71)
00:10:25.838  11736.902 - 11796.480:   21.6435%  (       75)
00:10:25.838  11796.480 - 11856.058:   22.3958%  (       65)
00:10:25.838  11856.058 - 11915.636:   23.2755%  (       76)
00:10:25.838  11915.636 - 11975.215:   24.1782%  (       78)
00:10:25.838  11975.215 - 12034.793:   25.1157%  (       81)
00:10:25.838  12034.793 - 12094.371:   26.1574%  (       90)
00:10:25.838  12094.371 - 12153.949:   27.1296%  (       84)
00:10:25.838  12153.949 - 12213.527:   28.1481%  (       88)
00:10:25.838  12213.527 - 12273.105:   29.0162%  (       75)
00:10:25.838  12273.105 - 12332.684:   29.9074%  (       77)
00:10:25.838  12332.684 - 12392.262:   30.9028%  (       86)
00:10:25.838  12392.262 - 12451.840:   31.9329%  (       89)
00:10:25.838  12451.840 - 12511.418:   32.7894%  (       74)
00:10:25.838  12511.418 - 12570.996:   33.6458%  (       74)
00:10:25.838  12570.996 - 12630.575:   34.5833%  (       81)
00:10:25.838  12630.575 - 12690.153:   35.5440%  (       83)
00:10:25.838  12690.153 - 12749.731:   36.4236%  (       76)
00:10:25.838  12749.731 - 12809.309:   37.3380%  (       79)
00:10:25.838  12809.309 - 12868.887:   38.3333%  (       86)
00:10:25.838  12868.887 - 12928.465:   39.2593%  (       80)
00:10:25.838  12928.465 - 12988.044:   40.0694%  (       70)
00:10:25.838  12988.044 - 13047.622:   40.9838%  (       79)
00:10:25.838  13047.622 - 13107.200:   41.8981%  (       79)
00:10:25.838  13107.200 - 13166.778:   42.8356%  (       81)
00:10:25.838  13166.778 - 13226.356:   43.7616%  (       80)
00:10:25.838  13226.356 - 13285.935:   44.5255%  (       66)
00:10:25.838  13285.935 - 13345.513:   45.4398%  (       79)
00:10:25.838  13345.513 - 13405.091:   46.2037%  (       66)
00:10:25.838  13405.091 - 13464.669:   46.9676%  (       66)
00:10:25.838  13464.669 - 13524.247:   47.8241%  (       74)
00:10:25.838  13524.247 - 13583.825:   48.6227%  (       69)
00:10:25.838  13583.825 - 13643.404:   49.3634%  (       64)
00:10:25.838  13643.404 - 13702.982:   50.1389%  (       67)
00:10:25.838  13702.982 - 13762.560:   50.7870%  (       56)
00:10:25.838  13762.560 - 13822.138:   51.5046%  (       62)
00:10:25.838  13822.138 - 13881.716:   52.1991%  (       60)
00:10:25.838  13881.716 - 13941.295:   52.7662%  (       49)
00:10:25.838  13941.295 - 14000.873:   53.4259%  (       57)
00:10:25.838  14000.873 - 14060.451:   54.0162%  (       51)
00:10:25.838  14060.451 - 14120.029:   54.6065%  (       51)
00:10:25.838  14120.029 - 14179.607:   55.1968%  (       51)
00:10:25.838  14179.607 - 14239.185:   55.8912%  (       60)
00:10:25.838  14239.185 - 14298.764:   56.4352%  (       47)
00:10:25.838  14298.764 - 14358.342:   57.1644%  (       63)
00:10:25.838  14358.342 - 14417.920:   57.7778%  (       53)
00:10:25.838  14417.920 - 14477.498:   58.4144%  (       55)
00:10:25.838  14477.498 - 14537.076:   59.0394%  (       54)
00:10:25.838  14537.076 - 14596.655:   59.6759%  (       55)
00:10:25.838  14596.655 - 14656.233:   60.2315%  (       48)
00:10:25.838  14656.233 - 14715.811:   60.7292%  (       43)
00:10:25.838  14715.811 - 14775.389:   61.2500%  (       45)
00:10:25.838  14775.389 - 14834.967:   61.7014%  (       39)
00:10:25.838  14834.967 - 14894.545:   62.0949%  (       34)
00:10:25.838  14894.545 - 14954.124:   62.5231%  (       37)
00:10:25.838  14954.124 - 15013.702:   62.9861%  (       40)
00:10:25.838  15013.702 - 15073.280:   63.2986%  (       27)
00:10:25.838  15073.280 - 15132.858:   63.6574%  (       31)
00:10:25.838  15132.858 - 15192.436:   63.9699%  (       27)
00:10:25.838  15192.436 - 15252.015:   64.3171%  (       30)
00:10:25.838  15252.015 - 15371.171:   64.8958%  (       50)
00:10:25.838  15371.171 - 15490.327:   65.4051%  (       44)
00:10:25.838  15490.327 - 15609.484:   65.7870%  (       33)
00:10:25.838  15609.484 - 15728.640:   66.2153%  (       37)
00:10:25.838  15728.640 - 15847.796:   66.5625%  (       30)
00:10:25.838  15847.796 - 15966.953:   67.0255%  (       40)
00:10:25.838  15966.953 - 16086.109:   67.4769%  (       39)
00:10:25.838  16086.109 - 16205.265:   67.9977%  (       45)
00:10:25.838  16205.265 - 16324.422:   68.5532%  (       48)
00:10:25.838  16324.422 - 16443.578:   69.2245%  (       58)
00:10:25.838  16443.578 - 16562.735:   70.0000%  (       67)
00:10:25.838  16562.735 - 16681.891:   70.6481%  (       56)
00:10:25.838  16681.891 - 16801.047:   71.4005%  (       65)
00:10:25.838  16801.047 - 16920.204:   72.1528%  (       65)
00:10:25.838  16920.204 - 17039.360:   73.0671%  (       79)
00:10:25.839  17039.360 - 17158.516:   74.0046%  (       81)
00:10:25.839  17158.516 - 17277.673:   75.1042%  (       95)
00:10:25.839  17277.673 - 17396.829:   75.9375%  (       72)
00:10:25.839  17396.829 - 17515.985:   76.7130%  (       67)
00:10:25.839  17515.985 - 17635.142:   77.4769%  (       66)
00:10:25.839  17635.142 - 17754.298:   78.2639%  (       68)
00:10:25.839  17754.298 - 17873.455:   79.1204%  (       74)
00:10:25.839  17873.455 - 17992.611:   79.7801%  (       57)
00:10:25.839  17992.611 - 18111.767:   80.4630%  (       59)
00:10:25.839  18111.767 - 18230.924:   81.2616%  (       69)
00:10:25.839  18230.924 - 18350.080:   82.0255%  (       66)
00:10:25.839  18350.080 - 18469.236:   82.7083%  (       59)
00:10:25.839  18469.236 - 18588.393:   83.2986%  (       51)
00:10:25.839  18588.393 - 18707.549:   83.9352%  (       55)
00:10:25.839  18707.549 - 18826.705:   84.5370%  (       52)
00:10:25.839  18826.705 - 18945.862:   85.1389%  (       52)
00:10:25.839  18945.862 - 19065.018:   85.7639%  (       54)
00:10:25.839  19065.018 - 19184.175:   86.4352%  (       58)
00:10:25.839  19184.175 - 19303.331:   87.1065%  (       58)
00:10:25.839  19303.331 - 19422.487:   87.7546%  (       56)
00:10:25.839  19422.487 - 19541.644:   88.2986%  (       47)
00:10:25.839  19541.644 - 19660.800:   88.9352%  (       55)
00:10:25.839  19660.800 - 19779.956:   89.4676%  (       46)
00:10:25.839  19779.956 - 19899.113:   90.0579%  (       51)
00:10:25.839  19899.113 - 20018.269:   90.5671%  (       44)
00:10:25.839  20018.269 - 20137.425:   91.0532%  (       42)
00:10:25.839  20137.425 - 20256.582:   91.4005%  (       30)
00:10:25.839  20256.582 - 20375.738:   91.8056%  (       35)
00:10:25.839  20375.738 - 20494.895:   92.3032%  (       43)
00:10:25.839  20494.895 - 20614.051:   92.6968%  (       34)
00:10:25.839  20614.051 - 20733.207:   93.1250%  (       37)
00:10:25.839  20733.207 - 20852.364:   93.6343%  (       44)
00:10:25.839  20852.364 - 20971.520:   94.0509%  (       36)
00:10:25.839  20971.520 - 21090.676:   94.4097%  (       31)
00:10:25.839  21090.676 - 21209.833:   94.7338%  (       28)
00:10:25.839  21209.833 - 21328.989:   95.0926%  (       31)
00:10:25.839  21328.989 - 21448.145:   95.3588%  (       23)
00:10:25.839  21448.145 - 21567.302:   95.5787%  (       19)
00:10:25.839  21567.302 - 21686.458:   95.9028%  (       28)
00:10:25.839  21686.458 - 21805.615:   96.0648%  (       14)
00:10:25.839  21805.615 - 21924.771:   96.2269%  (       14)
00:10:25.839  21924.771 - 22043.927:   96.3194%  (        8)
00:10:25.839  22043.927 - 22163.084:   96.4120%  (        8)
00:10:25.839  22163.084 - 22282.240:   96.4699%  (        5)
00:10:25.839  22282.240 - 22401.396:   96.5741%  (        9)
00:10:25.839  22401.396 - 22520.553:   96.6898%  (       10)
00:10:25.839  22520.553 - 22639.709:   96.8287%  (       12)
00:10:25.839  22639.709 - 22758.865:   96.9213%  (        8)
00:10:25.839  22758.865 - 22878.022:   97.0486%  (       11)
00:10:25.839  22878.022 - 22997.178:   97.1875%  (       12)
00:10:25.839  22997.178 - 23116.335:   97.3148%  (       11)
00:10:25.839  23116.335 - 23235.491:   97.4306%  (       10)
00:10:25.839  23235.491 - 23354.647:   97.5579%  (       11)
00:10:25.839  23354.647 - 23473.804:   97.6968%  (       12)
00:10:25.839  23473.804 - 23592.960:   97.7894%  (        8)
00:10:25.839  23592.960 - 23712.116:   97.9282%  (       12)
00:10:25.839  23712.116 - 23831.273:   98.0556%  (       11)
00:10:25.839  23831.273 - 23950.429:   98.1597%  (        9)
00:10:25.839  23950.429 - 24069.585:   98.2292%  (        6)
00:10:25.839  24069.585 - 24188.742:   98.2870%  (        5)
00:10:25.839  24188.742 - 24307.898:   98.3681%  (        7)
00:10:25.839  24307.898 - 24427.055:   98.4028%  (        3)
00:10:25.839  24427.055 - 24546.211:   98.4491%  (        4)
00:10:25.839  24546.211 - 24665.367:   98.4954%  (        4)
00:10:25.839  24665.367 - 24784.524:   98.5185%  (        2)
00:10:25.839  27763.433 - 27882.589:   98.5880%  (        6)
00:10:25.839  27882.589 - 28001.745:   98.6343%  (        4)
00:10:25.839  28001.745 - 28120.902:   98.6458%  (        1)
00:10:25.839  28120.902 - 28240.058:   98.6806%  (        3)
00:10:25.839  28240.058 - 28359.215:   98.7037%  (        2)
00:10:25.839  28359.215 - 28478.371:   98.7269%  (        2)
00:10:25.839  28478.371 - 28597.527:   98.7500%  (        2)
00:10:25.839  28597.527 - 28716.684:   98.7847%  (        3)
00:10:25.839  28716.684 - 28835.840:   98.7963%  (        1)
00:10:25.839  28835.840 - 28954.996:   98.8310%  (        3)
00:10:25.839  28954.996 - 29074.153:   98.8542%  (        2)
00:10:25.839  29074.153 - 29193.309:   98.8889%  (        3)
00:10:25.839  29193.309 - 29312.465:   98.9005%  (        1)
00:10:25.839  29312.465 - 29431.622:   98.9352%  (        3)
00:10:25.839  29431.622 - 29550.778:   98.9699%  (        3)
00:10:25.839  29550.778 - 29669.935:   98.9931%  (        2)
00:10:25.839  29669.935 - 29789.091:   99.0278%  (        3)
00:10:25.839  29789.091 - 29908.247:   99.0625%  (        3)
00:10:25.839  29908.247 - 30027.404:   99.0856%  (        2)
00:10:25.839  30027.404 - 30146.560:   99.1204%  (        3)
00:10:25.839  30146.560 - 30265.716:   99.1551%  (        3)
00:10:25.839  30265.716 - 30384.873:   99.1782%  (        2)
00:10:25.839  30384.873 - 30504.029:   99.2014%  (        2)
00:10:25.839  30504.029 - 30742.342:   99.2593%  (        5)
00:10:25.839  39083.287 - 39321.600:   99.3056%  (        4)
00:10:25.839  39321.600 - 39559.913:   99.3519%  (        4)
00:10:25.839  39559.913 - 39798.225:   99.4213%  (        6)
00:10:25.839  39798.225 - 40036.538:   99.4560%  (        3)
00:10:25.839  40036.538 - 40274.851:   99.5023%  (        4)
00:10:25.839  40274.851 - 40513.164:   99.5486%  (        4)
00:10:25.839  40513.164 - 40751.476:   99.6065%  (        5)
00:10:25.839  40751.476 - 40989.789:   99.6644%  (        5)
00:10:25.839  40989.789 - 41228.102:   99.7222%  (        5)
00:10:25.839  41228.102 - 41466.415:   99.7685%  (        4)
00:10:25.839  41466.415 - 41704.727:   99.8264%  (        5)
00:10:25.839  41704.727 - 41943.040:   99.8727%  (        4)
00:10:25.839  41943.040 - 42181.353:   99.9306%  (        5)
00:10:25.839  42181.353 - 42419.665:   99.9769%  (        4)
00:10:25.839  42419.665 - 42657.978:  100.0000%  (        2)
00:10:25.839  
00:10:25.839  Latency histogram for PCIE (0000:00:11.0) NSID 1                  from core 0:
00:10:25.839  ==============================================================================
00:10:25.839         Range in us     Cumulative    IO count
00:10:25.839   9830.400 -  9889.978:    0.0694%  (        6)
00:10:25.839   9889.978 -  9949.556:    0.2546%  (       16)
00:10:25.839   9949.556 - 10009.135:    0.5440%  (       25)
00:10:25.839  10009.135 - 10068.713:    0.7986%  (       22)
00:10:25.839  10068.713 - 10128.291:    1.0648%  (       23)
00:10:25.839  10128.291 - 10187.869:    1.6088%  (       47)
00:10:25.839  10187.869 - 10247.447:    2.1759%  (       49)
00:10:25.839  10247.447 - 10307.025:    2.6273%  (       39)
00:10:25.839  10307.025 - 10366.604:    3.0440%  (       36)
00:10:25.839  10366.604 - 10426.182:    3.6690%  (       54)
00:10:25.839  10426.182 - 10485.760:    4.2361%  (       49)
00:10:25.839  10485.760 - 10545.338:    4.8611%  (       54)
00:10:25.839  10545.338 - 10604.916:    5.5671%  (       61)
00:10:25.839  10604.916 - 10664.495:    6.1921%  (       54)
00:10:25.839  10664.495 - 10724.073:    6.8750%  (       59)
00:10:25.839  10724.073 - 10783.651:    7.5231%  (       56)
00:10:25.839  10783.651 - 10843.229:    8.2986%  (       67)
00:10:25.839  10843.229 - 10902.807:    9.0394%  (       64)
00:10:25.839  10902.807 - 10962.385:    9.8032%  (       66)
00:10:25.839  10962.385 - 11021.964:   10.6713%  (       75)
00:10:25.839  11021.964 - 11081.542:   11.4931%  (       71)
00:10:25.839  11081.542 - 11141.120:   12.3611%  (       75)
00:10:25.839  11141.120 - 11200.698:   13.3218%  (       83)
00:10:25.839  11200.698 - 11260.276:   14.2130%  (       77)
00:10:25.839  11260.276 - 11319.855:   15.0463%  (       72)
00:10:25.839  11319.855 - 11379.433:   15.8796%  (       72)
00:10:25.839  11379.433 - 11439.011:   16.9329%  (       91)
00:10:25.839  11439.011 - 11498.589:   17.8241%  (       77)
00:10:25.839  11498.589 - 11558.167:   18.5648%  (       64)
00:10:25.839  11558.167 - 11617.745:   19.4213%  (       74)
00:10:25.839  11617.745 - 11677.324:   20.2778%  (       74)
00:10:25.839  11677.324 - 11736.902:   21.0764%  (       69)
00:10:25.839  11736.902 - 11796.480:   21.8981%  (       71)
00:10:25.839  11796.480 - 11856.058:   22.6389%  (       64)
00:10:25.839  11856.058 - 11915.636:   23.4375%  (       69)
00:10:25.839  11915.636 - 11975.215:   24.3171%  (       76)
00:10:25.839  11975.215 - 12034.793:   25.2431%  (       80)
00:10:25.839  12034.793 - 12094.371:   26.3542%  (       96)
00:10:25.839  12094.371 - 12153.949:   27.2222%  (       75)
00:10:25.839  12153.949 - 12213.527:   28.2639%  (       90)
00:10:25.839  12213.527 - 12273.105:   29.1782%  (       79)
00:10:25.839  12273.105 - 12332.684:   30.0694%  (       77)
00:10:25.839  12332.684 - 12392.262:   30.9722%  (       78)
00:10:25.839  12392.262 - 12451.840:   31.8981%  (       80)
00:10:25.839  12451.840 - 12511.418:   32.7199%  (       71)
00:10:25.839  12511.418 - 12570.996:   33.5648%  (       73)
00:10:25.839  12570.996 - 12630.575:   34.3403%  (       67)
00:10:25.839  12630.575 - 12690.153:   35.1042%  (       66)
00:10:25.839  12690.153 - 12749.731:   35.8333%  (       63)
00:10:25.839  12749.731 - 12809.309:   36.6551%  (       71)
00:10:25.839  12809.309 - 12868.887:   37.5231%  (       75)
00:10:25.839  12868.887 - 12928.465:   38.4491%  (       80)
00:10:25.839  12928.465 - 12988.044:   39.3634%  (       79)
00:10:25.839  12988.044 - 13047.622:   40.1620%  (       69)
00:10:25.839  13047.622 - 13107.200:   41.0069%  (       73)
00:10:25.839  13107.200 - 13166.778:   41.8403%  (       72)
00:10:25.839  13166.778 - 13226.356:   42.6505%  (       70)
00:10:25.839  13226.356 - 13285.935:   43.3912%  (       64)
00:10:25.839  13285.935 - 13345.513:   44.1551%  (       66)
00:10:25.839  13345.513 - 13405.091:   44.9537%  (       69)
00:10:25.839  13405.091 - 13464.669:   45.7870%  (       72)
00:10:25.839  13464.669 - 13524.247:   46.5625%  (       67)
00:10:25.839  13524.247 - 13583.825:   47.4190%  (       74)
00:10:25.839  13583.825 - 13643.404:   48.1481%  (       63)
00:10:25.839  13643.404 - 13702.982:   48.8773%  (       63)
00:10:25.839  13702.982 - 13762.560:   49.4907%  (       53)
00:10:25.839  13762.560 - 13822.138:   50.2199%  (       63)
00:10:25.839  13822.138 - 13881.716:   50.9606%  (       64)
00:10:25.839  13881.716 - 13941.295:   51.9097%  (       82)
00:10:25.839  13941.295 - 14000.873:   52.6505%  (       64)
00:10:25.839  14000.873 - 14060.451:   53.3449%  (       60)
00:10:25.839  14060.451 - 14120.029:   54.0509%  (       61)
00:10:25.839  14120.029 - 14179.607:   54.7685%  (       62)
00:10:25.839  14179.607 - 14239.185:   55.4745%  (       61)
00:10:25.839  14239.185 - 14298.764:   56.2153%  (       64)
00:10:25.839  14298.764 - 14358.342:   56.8750%  (       57)
00:10:25.840  14358.342 - 14417.920:   57.5810%  (       61)
00:10:25.840  14417.920 - 14477.498:   58.2292%  (       56)
00:10:25.840  14477.498 - 14537.076:   59.0278%  (       69)
00:10:25.840  14537.076 - 14596.655:   59.6875%  (       57)
00:10:25.840  14596.655 - 14656.233:   60.3935%  (       61)
00:10:25.840  14656.233 - 14715.811:   61.0069%  (       53)
00:10:25.840  14715.811 - 14775.389:   61.6088%  (       52)
00:10:25.840  14775.389 - 14834.967:   62.2222%  (       53)
00:10:25.840  14834.967 - 14894.545:   62.9167%  (       60)
00:10:25.840  14894.545 - 14954.124:   63.5417%  (       54)
00:10:25.840  14954.124 - 15013.702:   64.0278%  (       42)
00:10:25.840  15013.702 - 15073.280:   64.6065%  (       50)
00:10:25.840  15073.280 - 15132.858:   65.0579%  (       39)
00:10:25.840  15132.858 - 15192.436:   65.4282%  (       32)
00:10:25.840  15192.436 - 15252.015:   65.7639%  (       29)
00:10:25.840  15252.015 - 15371.171:   66.4583%  (       60)
00:10:25.840  15371.171 - 15490.327:   67.1412%  (       59)
00:10:25.840  15490.327 - 15609.484:   67.8356%  (       60)
00:10:25.840  15609.484 - 15728.640:   68.3218%  (       42)
00:10:25.840  15728.640 - 15847.796:   68.6343%  (       27)
00:10:25.840  15847.796 - 15966.953:   68.8657%  (       20)
00:10:25.840  15966.953 - 16086.109:   69.2477%  (       33)
00:10:25.840  16086.109 - 16205.265:   69.4792%  (       20)
00:10:25.840  16205.265 - 16324.422:   69.9884%  (       44)
00:10:25.840  16324.422 - 16443.578:   70.6829%  (       60)
00:10:25.840  16443.578 - 16562.735:   71.2847%  (       52)
00:10:25.840  16562.735 - 16681.891:   71.8634%  (       50)
00:10:25.840  16681.891 - 16801.047:   72.3843%  (       45)
00:10:25.840  16801.047 - 16920.204:   72.9051%  (       45)
00:10:25.840  16920.204 - 17039.360:   73.3912%  (       42)
00:10:25.840  17039.360 - 17158.516:   73.8542%  (       40)
00:10:25.840  17158.516 - 17277.673:   74.4907%  (       55)
00:10:25.840  17277.673 - 17396.829:   75.0926%  (       52)
00:10:25.840  17396.829 - 17515.985:   75.6829%  (       51)
00:10:25.840  17515.985 - 17635.142:   76.1574%  (       41)
00:10:25.840  17635.142 - 17754.298:   76.6551%  (       43)
00:10:25.840  17754.298 - 17873.455:   77.1181%  (       40)
00:10:25.840  17873.455 - 17992.611:   77.6852%  (       49)
00:10:25.840  17992.611 - 18111.767:   78.4375%  (       65)
00:10:25.840  18111.767 - 18230.924:   79.2940%  (       74)
00:10:25.840  18230.924 - 18350.080:   80.0810%  (       68)
00:10:25.840  18350.080 - 18469.236:   80.9722%  (       77)
00:10:25.840  18469.236 - 18588.393:   81.9792%  (       87)
00:10:25.840  18588.393 - 18707.549:   82.8356%  (       74)
00:10:25.840  18707.549 - 18826.705:   83.7384%  (       78)
00:10:25.840  18826.705 - 18945.862:   84.7222%  (       85)
00:10:25.840  18945.862 - 19065.018:   85.6250%  (       78)
00:10:25.840  19065.018 - 19184.175:   86.5046%  (       76)
00:10:25.840  19184.175 - 19303.331:   87.3727%  (       75)
00:10:25.840  19303.331 - 19422.487:   88.1134%  (       64)
00:10:25.840  19422.487 - 19541.644:   88.7847%  (       58)
00:10:25.840  19541.644 - 19660.800:   89.5139%  (       63)
00:10:25.840  19660.800 - 19779.956:   90.2199%  (       61)
00:10:25.840  19779.956 - 19899.113:   90.8912%  (       58)
00:10:25.840  19899.113 - 20018.269:   91.5509%  (       57)
00:10:25.840  20018.269 - 20137.425:   92.0139%  (       40)
00:10:25.840  20137.425 - 20256.582:   92.5116%  (       43)
00:10:25.840  20256.582 - 20375.738:   92.9398%  (       37)
00:10:25.840  20375.738 - 20494.895:   93.3565%  (       36)
00:10:25.840  20494.895 - 20614.051:   93.6690%  (       27)
00:10:25.840  20614.051 - 20733.207:   93.9468%  (       24)
00:10:25.840  20733.207 - 20852.364:   94.2130%  (       23)
00:10:25.840  20852.364 - 20971.520:   94.4213%  (       18)
00:10:25.840  20971.520 - 21090.676:   94.6875%  (       23)
00:10:25.840  21090.676 - 21209.833:   94.8148%  (       11)
00:10:25.840  21209.833 - 21328.989:   95.0810%  (       23)
00:10:25.840  21328.989 - 21448.145:   95.3356%  (       22)
00:10:25.840  21448.145 - 21567.302:   95.5208%  (       16)
00:10:25.840  21567.302 - 21686.458:   95.7176%  (       17)
00:10:25.840  21686.458 - 21805.615:   95.9491%  (       20)
00:10:25.840  21805.615 - 21924.771:   96.1921%  (       21)
00:10:25.840  21924.771 - 22043.927:   96.4468%  (       22)
00:10:25.840  22043.927 - 22163.084:   96.6782%  (       20)
00:10:25.840  22163.084 - 22282.240:   96.8750%  (       17)
00:10:25.840  22282.240 - 22401.396:   97.0255%  (       13)
00:10:25.840  22401.396 - 22520.553:   97.1644%  (       12)
00:10:25.840  22520.553 - 22639.709:   97.3264%  (       14)
00:10:25.840  22639.709 - 22758.865:   97.4769%  (       13)
00:10:25.840  22758.865 - 22878.022:   97.6273%  (       13)
00:10:25.840  22878.022 - 22997.178:   97.7778%  (       13)
00:10:25.840  22997.178 - 23116.335:   97.9167%  (       12)
00:10:25.840  23116.335 - 23235.491:   98.0440%  (       11)
00:10:25.840  23235.491 - 23354.647:   98.1481%  (        9)
00:10:25.840  23354.647 - 23473.804:   98.2407%  (        8)
00:10:25.840  23473.804 - 23592.960:   98.3333%  (        8)
00:10:25.840  23592.960 - 23712.116:   98.4028%  (        6)
00:10:25.840  23712.116 - 23831.273:   98.4491%  (        4)
00:10:25.840  23831.273 - 23950.429:   98.4838%  (        3)
00:10:25.840  23950.429 - 24069.585:   98.5185%  (        3)
00:10:25.840  27286.807 - 27405.964:   98.5764%  (        5)
00:10:25.840  27405.964 - 27525.120:   98.6343%  (        5)
00:10:25.840  27525.120 - 27644.276:   98.6690%  (        3)
00:10:25.840  27644.276 - 27763.433:   98.6806%  (        1)
00:10:25.840  27763.433 - 27882.589:   98.7037%  (        2)
00:10:25.840  27882.589 - 28001.745:   98.7384%  (        3)
00:10:25.840  28001.745 - 28120.902:   98.7616%  (        2)
00:10:25.840  28120.902 - 28240.058:   98.7963%  (        3)
00:10:25.840  28240.058 - 28359.215:   98.8310%  (        3)
00:10:25.840  28359.215 - 28478.371:   98.8542%  (        2)
00:10:25.840  28478.371 - 28597.527:   98.8889%  (        3)
00:10:25.840  28597.527 - 28716.684:   98.9120%  (        2)
00:10:25.840  28716.684 - 28835.840:   98.9468%  (        3)
00:10:25.840  28835.840 - 28954.996:   98.9815%  (        3)
00:10:25.840  28954.996 - 29074.153:   99.0046%  (        2)
00:10:25.840  29074.153 - 29193.309:   99.0394%  (        3)
00:10:25.840  29193.309 - 29312.465:   99.0741%  (        3)
00:10:25.840  29312.465 - 29431.622:   99.0972%  (        2)
00:10:25.840  29431.622 - 29550.778:   99.1319%  (        3)
00:10:25.840  29550.778 - 29669.935:   99.1667%  (        3)
00:10:25.840  29669.935 - 29789.091:   99.1898%  (        2)
00:10:25.840  29789.091 - 29908.247:   99.2245%  (        3)
00:10:25.840  29908.247 - 30027.404:   99.2593%  (        3)
00:10:25.840  38130.036 - 38368.349:   99.2940%  (        3)
00:10:25.840  38368.349 - 38606.662:   99.3519%  (        5)
00:10:25.840  38606.662 - 38844.975:   99.4213%  (        6)
00:10:25.840  38844.975 - 39083.287:   99.4792%  (        5)
00:10:25.840  39083.287 - 39321.600:   99.5486%  (        6)
00:10:25.840  39321.600 - 39559.913:   99.6065%  (        5)
00:10:25.840  39559.913 - 39798.225:   99.6644%  (        5)
00:10:25.840  39798.225 - 40036.538:   99.7222%  (        5)
00:10:25.840  40036.538 - 40274.851:   99.7801%  (        5)
00:10:25.840  40274.851 - 40513.164:   99.8380%  (        5)
00:10:25.840  40513.164 - 40751.476:   99.8958%  (        5)
00:10:25.840  40751.476 - 40989.789:   99.9653%  (        6)
00:10:25.840  40989.789 - 41228.102:  100.0000%  (        3)
00:10:25.840  
00:10:25.840  Latency histogram for PCIE (0000:00:13.0) NSID 1                  from core 0:
00:10:25.840  ==============================================================================
00:10:25.840         Range in us     Cumulative    IO count
00:10:25.840   9889.978 -  9949.556:    0.0694%  (        6)
00:10:25.840   9949.556 - 10009.135:    0.3356%  (       23)
00:10:25.840  10009.135 - 10068.713:    0.5671%  (       20)
00:10:25.840  10068.713 - 10128.291:    0.9722%  (       35)
00:10:25.840  10128.291 - 10187.869:    1.3079%  (       29)
00:10:25.840  10187.869 - 10247.447:    1.8287%  (       45)
00:10:25.840  10247.447 - 10307.025:    2.3032%  (       41)
00:10:25.840  10307.025 - 10366.604:    2.8704%  (       49)
00:10:25.840  10366.604 - 10426.182:    3.3912%  (       45)
00:10:25.840  10426.182 - 10485.760:    3.9352%  (       47)
00:10:25.840  10485.760 - 10545.338:    4.4560%  (       45)
00:10:25.840  10545.338 - 10604.916:    5.1389%  (       59)
00:10:25.840  10604.916 - 10664.495:    5.8333%  (       60)
00:10:25.840  10664.495 - 10724.073:    6.5162%  (       59)
00:10:25.840  10724.073 - 10783.651:    7.1759%  (       57)
00:10:25.840  10783.651 - 10843.229:    8.0093%  (       72)
00:10:25.840  10843.229 - 10902.807:    8.9005%  (       77)
00:10:25.840  10902.807 - 10962.385:    9.7222%  (       71)
00:10:25.840  10962.385 - 11021.964:   10.7407%  (       88)
00:10:25.840  11021.964 - 11081.542:   11.7361%  (       86)
00:10:25.840  11081.542 - 11141.120:   12.5463%  (       70)
00:10:25.840  11141.120 - 11200.698:   13.2639%  (       62)
00:10:25.840  11200.698 - 11260.276:   13.9352%  (       58)
00:10:25.840  11260.276 - 11319.855:   14.5833%  (       56)
00:10:25.840  11319.855 - 11379.433:   15.4398%  (       74)
00:10:25.840  11379.433 - 11439.011:   16.2153%  (       67)
00:10:25.840  11439.011 - 11498.589:   17.0602%  (       73)
00:10:25.840  11498.589 - 11558.167:   17.8819%  (       71)
00:10:25.840  11558.167 - 11617.745:   18.8194%  (       81)
00:10:25.840  11617.745 - 11677.324:   19.8495%  (       89)
00:10:25.840  11677.324 - 11736.902:   20.9722%  (       97)
00:10:25.840  11736.902 - 11796.480:   21.9329%  (       83)
00:10:25.840  11796.480 - 11856.058:   22.7199%  (       68)
00:10:25.840  11856.058 - 11915.636:   23.5069%  (       68)
00:10:25.840  11915.636 - 11975.215:   24.5718%  (       92)
00:10:25.840  11975.215 - 12034.793:   25.5556%  (       85)
00:10:25.840  12034.793 - 12094.371:   26.4468%  (       77)
00:10:25.840  12094.371 - 12153.949:   27.2222%  (       67)
00:10:25.840  12153.949 - 12213.527:   28.0787%  (       74)
00:10:25.840  12213.527 - 12273.105:   29.0162%  (       81)
00:10:25.840  12273.105 - 12332.684:   30.0463%  (       89)
00:10:25.840  12332.684 - 12392.262:   31.0069%  (       83)
00:10:25.840  12392.262 - 12451.840:   31.9792%  (       84)
00:10:25.840  12451.840 - 12511.418:   32.9977%  (       88)
00:10:25.840  12511.418 - 12570.996:   34.0394%  (       90)
00:10:25.840  12570.996 - 12630.575:   35.0926%  (       91)
00:10:25.840  12630.575 - 12690.153:   36.1343%  (       90)
00:10:25.840  12690.153 - 12749.731:   37.0949%  (       83)
00:10:25.840  12749.731 - 12809.309:   38.0440%  (       82)
00:10:25.840  12809.309 - 12868.887:   39.0509%  (       87)
00:10:25.840  12868.887 - 12928.465:   39.7801%  (       63)
00:10:25.840  12928.465 - 12988.044:   40.6250%  (       73)
00:10:25.840  12988.044 - 13047.622:   41.3657%  (       64)
00:10:25.840  13047.622 - 13107.200:   42.2338%  (       75)
00:10:25.840  13107.200 - 13166.778:   43.0440%  (       70)
00:10:25.840  13166.778 - 13226.356:   43.8310%  (       68)
00:10:25.840  13226.356 - 13285.935:   44.6412%  (       70)
00:10:25.840  13285.935 - 13345.513:   45.3241%  (       59)
00:10:25.840  13345.513 - 13405.091:   46.1227%  (       69)
00:10:25.840  13405.091 - 13464.669:   47.0139%  (       77)
00:10:25.841  13464.669 - 13524.247:   47.8935%  (       76)
00:10:25.841  13524.247 - 13583.825:   48.7616%  (       75)
00:10:25.841  13583.825 - 13643.404:   49.5718%  (       70)
00:10:25.841  13643.404 - 13702.982:   50.3935%  (       71)
00:10:25.841  13702.982 - 13762.560:   51.2269%  (       72)
00:10:25.841  13762.560 - 13822.138:   52.0139%  (       68)
00:10:25.841  13822.138 - 13881.716:   52.7778%  (       66)
00:10:25.841  13881.716 - 13941.295:   53.4606%  (       59)
00:10:25.841  13941.295 - 14000.873:   54.0046%  (       47)
00:10:25.841  14000.873 - 14060.451:   54.7222%  (       62)
00:10:25.841  14060.451 - 14120.029:   55.3241%  (       52)
00:10:25.841  14120.029 - 14179.607:   55.9491%  (       54)
00:10:25.841  14179.607 - 14239.185:   56.6435%  (       60)
00:10:25.841  14239.185 - 14298.764:   57.2222%  (       50)
00:10:25.841  14298.764 - 14358.342:   57.8356%  (       53)
00:10:25.841  14358.342 - 14417.920:   58.3681%  (       46)
00:10:25.841  14417.920 - 14477.498:   58.9699%  (       52)
00:10:25.841  14477.498 - 14537.076:   59.5139%  (       47)
00:10:25.841  14537.076 - 14596.655:   59.9537%  (       38)
00:10:25.841  14596.655 - 14656.233:   60.4051%  (       39)
00:10:25.841  14656.233 - 14715.811:   60.8681%  (       40)
00:10:25.841  14715.811 - 14775.389:   61.3079%  (       38)
00:10:25.841  14775.389 - 14834.967:   61.7708%  (       40)
00:10:25.841  14834.967 - 14894.545:   62.1412%  (       32)
00:10:25.841  14894.545 - 14954.124:   62.6273%  (       42)
00:10:25.841  14954.124 - 15013.702:   63.0787%  (       39)
00:10:25.841  15013.702 - 15073.280:   63.4838%  (       35)
00:10:25.841  15073.280 - 15132.858:   63.8889%  (       35)
00:10:25.841  15132.858 - 15192.436:   64.2014%  (       27)
00:10:25.841  15192.436 - 15252.015:   64.4676%  (       23)
00:10:25.841  15252.015 - 15371.171:   64.9884%  (       45)
00:10:25.841  15371.171 - 15490.327:   65.5208%  (       46)
00:10:25.841  15490.327 - 15609.484:   66.0764%  (       48)
00:10:25.841  15609.484 - 15728.640:   66.6782%  (       52)
00:10:25.841  15728.640 - 15847.796:   67.3843%  (       61)
00:10:25.841  15847.796 - 15966.953:   67.9051%  (       45)
00:10:25.841  15966.953 - 16086.109:   68.3102%  (       35)
00:10:25.841  16086.109 - 16205.265:   68.6574%  (       30)
00:10:25.841  16205.265 - 16324.422:   68.9583%  (       26)
00:10:25.841  16324.422 - 16443.578:   69.3634%  (       35)
00:10:25.841  16443.578 - 16562.735:   69.8264%  (       40)
00:10:25.841  16562.735 - 16681.891:   70.5093%  (       59)
00:10:25.841  16681.891 - 16801.047:   70.9954%  (       42)
00:10:25.841  16801.047 - 16920.204:   71.5046%  (       44)
00:10:25.841  16920.204 - 17039.360:   72.3148%  (       70)
00:10:25.841  17039.360 - 17158.516:   73.2176%  (       78)
00:10:25.841  17158.516 - 17277.673:   74.0278%  (       70)
00:10:25.841  17277.673 - 17396.829:   74.6759%  (       56)
00:10:25.841  17396.829 - 17515.985:   75.2662%  (       51)
00:10:25.841  17515.985 - 17635.142:   75.8449%  (       50)
00:10:25.841  17635.142 - 17754.298:   76.5741%  (       63)
00:10:25.841  17754.298 - 17873.455:   77.3843%  (       70)
00:10:25.841  17873.455 - 17992.611:   78.3796%  (       86)
00:10:25.841  17992.611 - 18111.767:   79.2130%  (       72)
00:10:25.841  18111.767 - 18230.924:   79.9074%  (       60)
00:10:25.841  18230.924 - 18350.080:   80.7523%  (       73)
00:10:25.841  18350.080 - 18469.236:   81.7361%  (       85)
00:10:25.841  18469.236 - 18588.393:   82.7662%  (       89)
00:10:25.841  18588.393 - 18707.549:   83.7616%  (       86)
00:10:25.841  18707.549 - 18826.705:   84.7454%  (       85)
00:10:25.841  18826.705 - 18945.862:   85.6944%  (       82)
00:10:25.841  18945.862 - 19065.018:   86.6667%  (       84)
00:10:25.841  19065.018 - 19184.175:   87.5810%  (       79)
00:10:25.841  19184.175 - 19303.331:   88.5185%  (       81)
00:10:25.841  19303.331 - 19422.487:   89.2130%  (       60)
00:10:25.841  19422.487 - 19541.644:   89.8148%  (       52)
00:10:25.841  19541.644 - 19660.800:   90.4282%  (       53)
00:10:25.841  19660.800 - 19779.956:   90.9838%  (       48)
00:10:25.841  19779.956 - 19899.113:   91.5625%  (       50)
00:10:25.841  19899.113 - 20018.269:   92.1528%  (       51)
00:10:25.841  20018.269 - 20137.425:   92.6968%  (       47)
00:10:25.841  20137.425 - 20256.582:   93.1019%  (       35)
00:10:25.841  20256.582 - 20375.738:   93.4606%  (       31)
00:10:25.841  20375.738 - 20494.895:   93.7269%  (       23)
00:10:25.841  20494.895 - 20614.051:   94.0278%  (       26)
00:10:25.841  20614.051 - 20733.207:   94.2593%  (       20)
00:10:25.841  20733.207 - 20852.364:   94.4676%  (       18)
00:10:25.841  20852.364 - 20971.520:   94.6644%  (       17)
00:10:25.841  20971.520 - 21090.676:   94.8611%  (       17)
00:10:25.841  21090.676 - 21209.833:   95.0463%  (       16)
00:10:25.841  21209.833 - 21328.989:   95.2662%  (       19)
00:10:25.841  21328.989 - 21448.145:   95.4282%  (       14)
00:10:25.841  21448.145 - 21567.302:   95.5787%  (       13)
00:10:25.841  21567.302 - 21686.458:   95.7176%  (       12)
00:10:25.841  21686.458 - 21805.615:   95.8796%  (       14)
00:10:25.841  21805.615 - 21924.771:   96.0880%  (       18)
00:10:25.841  21924.771 - 22043.927:   96.2384%  (       13)
00:10:25.841  22043.927 - 22163.084:   96.4236%  (       16)
00:10:25.841  22163.084 - 22282.240:   96.5972%  (       15)
00:10:25.841  22282.240 - 22401.396:   96.7824%  (       16)
00:10:25.841  22401.396 - 22520.553:   96.9676%  (       16)
00:10:25.841  22520.553 - 22639.709:   97.1296%  (       14)
00:10:25.841  22639.709 - 22758.865:   97.3380%  (       18)
00:10:25.841  22758.865 - 22878.022:   97.4769%  (       12)
00:10:25.841  22878.022 - 22997.178:   97.5926%  (       10)
00:10:25.841  22997.178 - 23116.335:   97.6968%  (        9)
00:10:25.841  23116.335 - 23235.491:   97.8009%  (        9)
00:10:25.841  23235.491 - 23354.647:   97.9051%  (        9)
00:10:25.841  23354.647 - 23473.804:   97.9861%  (        7)
00:10:25.841  23473.804 - 23592.960:   98.0671%  (        7)
00:10:25.841  23592.960 - 23712.116:   98.1250%  (        5)
00:10:25.841  23712.116 - 23831.273:   98.1713%  (        4)
00:10:25.841  23831.273 - 23950.429:   98.2292%  (        5)
00:10:25.841  23950.429 - 24069.585:   98.2986%  (        6)
00:10:25.841  24069.585 - 24188.742:   98.3565%  (        5)
00:10:25.841  24188.742 - 24307.898:   98.4259%  (        6)
00:10:25.841  24307.898 - 24427.055:   98.4722%  (        4)
00:10:25.841  24427.055 - 24546.211:   98.5185%  (        4)
00:10:25.841  25856.931 - 25976.087:   98.5301%  (        1)
00:10:25.841  25976.087 - 26095.244:   98.5532%  (        2)
00:10:25.841  26095.244 - 26214.400:   98.5880%  (        3)
00:10:25.841  26214.400 - 26333.556:   98.6227%  (        3)
00:10:25.841  26333.556 - 26452.713:   98.6458%  (        2)
00:10:25.841  26452.713 - 26571.869:   98.6806%  (        3)
00:10:25.841  26571.869 - 26691.025:   98.7037%  (        2)
00:10:25.841  26691.025 - 26810.182:   98.7384%  (        3)
00:10:25.841  26810.182 - 26929.338:   98.7731%  (        3)
00:10:25.841  26929.338 - 27048.495:   98.8079%  (        3)
00:10:25.841  27048.495 - 27167.651:   98.8310%  (        2)
00:10:25.841  27167.651 - 27286.807:   98.8657%  (        3)
00:10:25.841  27286.807 - 27405.964:   98.8889%  (        2)
00:10:25.841  27405.964 - 27525.120:   98.9236%  (        3)
00:10:25.841  27525.120 - 27644.276:   98.9583%  (        3)
00:10:25.841  27644.276 - 27763.433:   98.9815%  (        2)
00:10:25.841  27763.433 - 27882.589:   99.0162%  (        3)
00:10:25.841  27882.589 - 28001.745:   99.0509%  (        3)
00:10:25.841  28001.745 - 28120.902:   99.0741%  (        2)
00:10:25.841  28120.902 - 28240.058:   99.1088%  (        3)
00:10:25.841  28240.058 - 28359.215:   99.1435%  (        3)
00:10:25.841  28359.215 - 28478.371:   99.1551%  (        1)
00:10:25.841  28478.371 - 28597.527:   99.1898%  (        3)
00:10:25.841  28597.527 - 28716.684:   99.2130%  (        2)
00:10:25.841  28716.684 - 28835.840:   99.2477%  (        3)
00:10:25.841  28835.840 - 28954.996:   99.2593%  (        1)
00:10:25.841  36938.473 - 37176.785:   99.3056%  (        4)
00:10:25.841  37176.785 - 37415.098:   99.3634%  (        5)
00:10:25.841  37415.098 - 37653.411:   99.4213%  (        5)
00:10:25.841  37653.411 - 37891.724:   99.4792%  (        5)
00:10:25.841  37891.724 - 38130.036:   99.5486%  (        6)
00:10:25.841  38130.036 - 38368.349:   99.6065%  (        5)
00:10:25.841  38368.349 - 38606.662:   99.6644%  (        5)
00:10:25.841  38606.662 - 38844.975:   99.7222%  (        5)
00:10:25.841  38844.975 - 39083.287:   99.7917%  (        6)
00:10:25.841  39083.287 - 39321.600:   99.8495%  (        5)
00:10:25.841  39321.600 - 39559.913:   99.9074%  (        5)
00:10:25.841  39559.913 - 39798.225:   99.9769%  (        6)
00:10:25.841  39798.225 - 40036.538:  100.0000%  (        2)
00:10:25.841  
00:10:25.841  Latency histogram for PCIE (0000:00:12.0) NSID 1                  from core 0:
00:10:25.841  ==============================================================================
00:10:25.841         Range in us     Cumulative    IO count
00:10:25.841   9889.978 -  9949.556:    0.0347%  (        3)
00:10:25.841   9949.556 - 10009.135:    0.1505%  (       10)
00:10:25.841  10009.135 - 10068.713:    0.4167%  (       23)
00:10:25.841  10068.713 - 10128.291:    0.6713%  (       22)
00:10:25.841  10128.291 - 10187.869:    0.9722%  (       26)
00:10:25.841  10187.869 - 10247.447:    1.4468%  (       41)
00:10:25.841  10247.447 - 10307.025:    1.9329%  (       42)
00:10:25.841  10307.025 - 10366.604:    2.4421%  (       44)
00:10:25.841  10366.604 - 10426.182:    3.0324%  (       51)
00:10:25.841  10426.182 - 10485.760:    3.6343%  (       52)
00:10:25.841  10485.760 - 10545.338:    4.2708%  (       55)
00:10:25.841  10545.338 - 10604.916:    4.8843%  (       53)
00:10:25.841  10604.916 - 10664.495:    5.5787%  (       60)
00:10:25.841  10664.495 - 10724.073:    6.2037%  (       54)
00:10:25.841  10724.073 - 10783.651:    6.8750%  (       58)
00:10:25.841  10783.651 - 10843.229:    7.6157%  (       64)
00:10:25.841  10843.229 - 10902.807:    8.4259%  (       70)
00:10:25.841  10902.807 - 10962.385:    9.2477%  (       71)
00:10:25.841  10962.385 - 11021.964:   10.1736%  (       80)
00:10:25.841  11021.964 - 11081.542:   11.3542%  (      102)
00:10:25.841  11081.542 - 11141.120:   12.4769%  (       97)
00:10:25.841  11141.120 - 11200.698:   13.4375%  (       83)
00:10:25.841  11200.698 - 11260.276:   14.3750%  (       81)
00:10:25.841  11260.276 - 11319.855:   15.2199%  (       73)
00:10:25.841  11319.855 - 11379.433:   16.1690%  (       82)
00:10:25.841  11379.433 - 11439.011:   17.0370%  (       75)
00:10:25.841  11439.011 - 11498.589:   17.7778%  (       64)
00:10:25.841  11498.589 - 11558.167:   18.5648%  (       68)
00:10:25.841  11558.167 - 11617.745:   19.4444%  (       76)
00:10:25.841  11617.745 - 11677.324:   20.2894%  (       73)
00:10:25.841  11677.324 - 11736.902:   21.4352%  (       99)
00:10:25.841  11736.902 - 11796.480:   22.5579%  (       97)
00:10:25.841  11796.480 - 11856.058:   23.6111%  (       91)
00:10:25.841  11856.058 - 11915.636:   24.7222%  (       96)
00:10:25.841  11915.636 - 11975.215:   25.5787%  (       74)
00:10:25.841  11975.215 - 12034.793:   26.5625%  (       85)
00:10:25.841  12034.793 - 12094.371:   27.5926%  (       89)
00:10:25.841  12094.371 - 12153.949:   28.5301%  (       81)
00:10:25.842  12153.949 - 12213.527:   29.4097%  (       76)
00:10:25.842  12213.527 - 12273.105:   30.3009%  (       77)
00:10:25.842  12273.105 - 12332.684:   31.2037%  (       78)
00:10:25.842  12332.684 - 12392.262:   32.0949%  (       77)
00:10:25.842  12392.262 - 12451.840:   33.0324%  (       81)
00:10:25.842  12451.840 - 12511.418:   33.9005%  (       75)
00:10:25.842  12511.418 - 12570.996:   34.8727%  (       84)
00:10:25.842  12570.996 - 12630.575:   35.9028%  (       89)
00:10:25.842  12630.575 - 12690.153:   36.7824%  (       76)
00:10:25.842  12690.153 - 12749.731:   37.6736%  (       77)
00:10:25.842  12749.731 - 12809.309:   38.4722%  (       69)
00:10:25.842  12809.309 - 12868.887:   39.2593%  (       68)
00:10:25.842  12868.887 - 12928.465:   40.0347%  (       67)
00:10:25.842  12928.465 - 12988.044:   40.7870%  (       65)
00:10:25.842  12988.044 - 13047.622:   41.5394%  (       65)
00:10:25.842  13047.622 - 13107.200:   42.1412%  (       52)
00:10:25.842  13107.200 - 13166.778:   42.7315%  (       51)
00:10:25.842  13166.778 - 13226.356:   43.3333%  (       52)
00:10:25.842  13226.356 - 13285.935:   43.9583%  (       54)
00:10:25.842  13285.935 - 13345.513:   44.6528%  (       60)
00:10:25.842  13345.513 - 13405.091:   45.3009%  (       56)
00:10:25.842  13405.091 - 13464.669:   46.0764%  (       67)
00:10:25.842  13464.669 - 13524.247:   46.8519%  (       67)
00:10:25.842  13524.247 - 13583.825:   47.5000%  (       56)
00:10:25.842  13583.825 - 13643.404:   48.2176%  (       62)
00:10:25.842  13643.404 - 13702.982:   49.0741%  (       74)
00:10:25.842  13702.982 - 13762.560:   49.9769%  (       78)
00:10:25.842  13762.560 - 13822.138:   50.7060%  (       63)
00:10:25.842  13822.138 - 13881.716:   51.4583%  (       65)
00:10:25.842  13881.716 - 13941.295:   52.1181%  (       57)
00:10:25.842  13941.295 - 14000.873:   52.6968%  (       50)
00:10:25.842  14000.873 - 14060.451:   53.2870%  (       51)
00:10:25.842  14060.451 - 14120.029:   53.9468%  (       57)
00:10:25.842  14120.029 - 14179.607:   54.5139%  (       49)
00:10:25.842  14179.607 - 14239.185:   55.2778%  (       66)
00:10:25.842  14239.185 - 14298.764:   55.9722%  (       60)
00:10:25.842  14298.764 - 14358.342:   56.6782%  (       61)
00:10:25.842  14358.342 - 14417.920:   57.3148%  (       55)
00:10:25.842  14417.920 - 14477.498:   58.0324%  (       62)
00:10:25.842  14477.498 - 14537.076:   58.6574%  (       54)
00:10:25.842  14537.076 - 14596.655:   59.3171%  (       57)
00:10:25.842  14596.655 - 14656.233:   59.8843%  (       49)
00:10:25.842  14656.233 - 14715.811:   60.3704%  (       42)
00:10:25.842  14715.811 - 14775.389:   60.8796%  (       44)
00:10:25.842  14775.389 - 14834.967:   61.4583%  (       50)
00:10:25.842  14834.967 - 14894.545:   61.8981%  (       38)
00:10:25.842  14894.545 - 14954.124:   62.3611%  (       40)
00:10:25.842  14954.124 - 15013.702:   62.8241%  (       40)
00:10:25.842  15013.702 - 15073.280:   63.2292%  (       35)
00:10:25.842  15073.280 - 15132.858:   63.6690%  (       38)
00:10:25.842  15132.858 - 15192.436:   64.0972%  (       37)
00:10:25.842  15192.436 - 15252.015:   64.5370%  (       38)
00:10:25.842  15252.015 - 15371.171:   65.2546%  (       62)
00:10:25.842  15371.171 - 15490.327:   65.8565%  (       52)
00:10:25.842  15490.327 - 15609.484:   66.5972%  (       64)
00:10:25.842  15609.484 - 15728.640:   67.1644%  (       49)
00:10:25.842  15728.640 - 15847.796:   67.6157%  (       39)
00:10:25.842  15847.796 - 15966.953:   67.9051%  (       25)
00:10:25.842  15966.953 - 16086.109:   68.3333%  (       37)
00:10:25.842  16086.109 - 16205.265:   68.9352%  (       52)
00:10:25.842  16205.265 - 16324.422:   69.6875%  (       65)
00:10:25.842  16324.422 - 16443.578:   70.3935%  (       61)
00:10:25.842  16443.578 - 16562.735:   71.0185%  (       54)
00:10:25.842  16562.735 - 16681.891:   71.5625%  (       47)
00:10:25.842  16681.891 - 16801.047:   72.2106%  (       56)
00:10:25.842  16801.047 - 16920.204:   72.8935%  (       59)
00:10:25.842  16920.204 - 17039.360:   73.5764%  (       59)
00:10:25.842  17039.360 - 17158.516:   74.1435%  (       49)
00:10:25.842  17158.516 - 17277.673:   74.9537%  (       70)
00:10:25.842  17277.673 - 17396.829:   75.5787%  (       54)
00:10:25.842  17396.829 - 17515.985:   76.1458%  (       49)
00:10:25.842  17515.985 - 17635.142:   76.7477%  (       52)
00:10:25.842  17635.142 - 17754.298:   77.4421%  (       60)
00:10:25.842  17754.298 - 17873.455:   78.1134%  (       58)
00:10:25.842  17873.455 - 17992.611:   78.8426%  (       63)
00:10:25.842  17992.611 - 18111.767:   79.5833%  (       64)
00:10:25.842  18111.767 - 18230.924:   80.3009%  (       62)
00:10:25.842  18230.924 - 18350.080:   81.0532%  (       65)
00:10:25.842  18350.080 - 18469.236:   81.8171%  (       66)
00:10:25.842  18469.236 - 18588.393:   82.6620%  (       73)
00:10:25.842  18588.393 - 18707.549:   83.5880%  (       80)
00:10:25.842  18707.549 - 18826.705:   84.5602%  (       84)
00:10:25.842  18826.705 - 18945.862:   85.4630%  (       78)
00:10:25.842  18945.862 - 19065.018:   86.2500%  (       68)
00:10:25.842  19065.018 - 19184.175:   87.0370%  (       68)
00:10:25.842  19184.175 - 19303.331:   87.7662%  (       63)
00:10:25.842  19303.331 - 19422.487:   88.4606%  (       60)
00:10:25.842  19422.487 - 19541.644:   89.1435%  (       59)
00:10:25.842  19541.644 - 19660.800:   89.8032%  (       57)
00:10:25.842  19660.800 - 19779.956:   90.4630%  (       57)
00:10:25.842  19779.956 - 19899.113:   91.0301%  (       49)
00:10:25.842  19899.113 - 20018.269:   91.5278%  (       43)
00:10:25.842  20018.269 - 20137.425:   92.0602%  (       46)
00:10:25.842  20137.425 - 20256.582:   92.5231%  (       40)
00:10:25.842  20256.582 - 20375.738:   92.9398%  (       36)
00:10:25.842  20375.738 - 20494.895:   93.3333%  (       34)
00:10:25.842  20494.895 - 20614.051:   93.7037%  (       32)
00:10:25.842  20614.051 - 20733.207:   94.0278%  (       28)
00:10:25.842  20733.207 - 20852.364:   94.4560%  (       37)
00:10:25.842  20852.364 - 20971.520:   94.8495%  (       34)
00:10:25.842  20971.520 - 21090.676:   95.1042%  (       22)
00:10:25.842  21090.676 - 21209.833:   95.2894%  (       16)
00:10:25.842  21209.833 - 21328.989:   95.4861%  (       17)
00:10:25.842  21328.989 - 21448.145:   95.6134%  (       11)
00:10:25.842  21448.145 - 21567.302:   95.7407%  (       11)
00:10:25.842  21567.302 - 21686.458:   95.8565%  (       10)
00:10:25.842  21686.458 - 21805.615:   95.9722%  (       10)
00:10:25.842  21805.615 - 21924.771:   96.2153%  (       21)
00:10:25.842  21924.771 - 22043.927:   96.4236%  (       18)
00:10:25.842  22043.927 - 22163.084:   96.6435%  (       19)
00:10:25.842  22163.084 - 22282.240:   96.8750%  (       20)
00:10:25.842  22282.240 - 22401.396:   97.0370%  (       14)
00:10:25.842  22401.396 - 22520.553:   97.1644%  (       11)
00:10:25.842  22520.553 - 22639.709:   97.2917%  (       11)
00:10:25.842  22639.709 - 22758.865:   97.3958%  (        9)
00:10:25.842  22758.865 - 22878.022:   97.4884%  (        8)
00:10:25.842  22878.022 - 22997.178:   97.6042%  (       10)
00:10:25.842  22997.178 - 23116.335:   97.6968%  (        8)
00:10:25.842  23116.335 - 23235.491:   97.8125%  (       10)
00:10:25.842  23235.491 - 23354.647:   97.9398%  (       11)
00:10:25.842  23354.647 - 23473.804:   98.0208%  (        7)
00:10:25.842  23473.804 - 23592.960:   98.0903%  (        6)
00:10:25.842  23592.960 - 23712.116:   98.1829%  (        8)
00:10:25.842  23712.116 - 23831.273:   98.2523%  (        6)
00:10:25.842  23831.273 - 23950.429:   98.2986%  (        4)
00:10:25.842  23950.429 - 24069.585:   98.3681%  (        6)
00:10:25.842  24069.585 - 24188.742:   98.4259%  (        5)
00:10:25.842  24188.742 - 24307.898:   98.4954%  (        6)
00:10:25.842  24307.898 - 24427.055:   98.5185%  (        2)
00:10:25.842  24784.524 - 24903.680:   98.5417%  (        2)
00:10:25.842  24903.680 - 25022.836:   98.5995%  (        5)
00:10:25.842  25022.836 - 25141.993:   98.6227%  (        2)
00:10:25.842  25141.993 - 25261.149:   98.6458%  (        2)
00:10:25.842  25261.149 - 25380.305:   98.6806%  (        3)
00:10:25.842  25380.305 - 25499.462:   98.7153%  (        3)
00:10:25.842  25499.462 - 25618.618:   98.7384%  (        2)
00:10:25.842  25618.618 - 25737.775:   98.7731%  (        3)
00:10:25.842  25737.775 - 25856.931:   98.8079%  (        3)
00:10:25.842  25856.931 - 25976.087:   98.8310%  (        2)
00:10:25.842  25976.087 - 26095.244:   98.8657%  (        3)
00:10:25.842  26095.244 - 26214.400:   98.9005%  (        3)
00:10:25.842  26214.400 - 26333.556:   98.9236%  (        2)
00:10:25.842  26333.556 - 26452.713:   98.9583%  (        3)
00:10:25.842  26452.713 - 26571.869:   98.9931%  (        3)
00:10:25.842  26571.869 - 26691.025:   99.0162%  (        2)
00:10:25.842  26691.025 - 26810.182:   99.0394%  (        2)
00:10:25.842  26810.182 - 26929.338:   99.0741%  (        3)
00:10:25.842  26929.338 - 27048.495:   99.1088%  (        3)
00:10:25.842  27048.495 - 27167.651:   99.1319%  (        2)
00:10:25.842  27167.651 - 27286.807:   99.1667%  (        3)
00:10:25.842  27286.807 - 27405.964:   99.1898%  (        2)
00:10:25.842  27405.964 - 27525.120:   99.2245%  (        3)
00:10:25.842  27525.120 - 27644.276:   99.2593%  (        3)
00:10:25.842  35746.909 - 35985.222:   99.3171%  (        5)
00:10:25.842  35985.222 - 36223.535:   99.3750%  (        5)
00:10:25.842  36223.535 - 36461.847:   99.4444%  (        6)
00:10:25.842  36461.847 - 36700.160:   99.5023%  (        5)
00:10:25.842  36700.160 - 36938.473:   99.5602%  (        5)
00:10:25.842  36938.473 - 37176.785:   99.6296%  (        6)
00:10:25.843  37176.785 - 37415.098:   99.6875%  (        5)
00:10:25.843  37415.098 - 37653.411:   99.7454%  (        5)
00:10:25.843  37653.411 - 37891.724:   99.8148%  (        6)
00:10:25.843  37891.724 - 38130.036:   99.8727%  (        5)
00:10:25.843  38130.036 - 38368.349:   99.9306%  (        5)
00:10:25.843  38368.349 - 38606.662:  100.0000%  (        6)
00:10:25.843  
00:10:25.843  Latency histogram for PCIE (0000:00:12.0) NSID 2                  from core 0:
00:10:25.843  ==============================================================================
00:10:25.843         Range in us     Cumulative    IO count
00:10:25.843   9711.244 -  9770.822:    0.0116%  (        1)
00:10:25.843   9770.822 -  9830.400:    0.0231%  (        1)
00:10:25.843   9830.400 -  9889.978:    0.0347%  (        1)
00:10:25.843   9949.556 - 10009.135:    0.1157%  (        7)
00:10:25.843  10009.135 - 10068.713:    0.3819%  (       23)
00:10:25.843  10068.713 - 10128.291:    0.7870%  (       35)
00:10:25.843  10128.291 - 10187.869:    1.3426%  (       48)
00:10:25.843  10187.869 - 10247.447:    1.8750%  (       46)
00:10:25.843  10247.447 - 10307.025:    2.3958%  (       45)
00:10:25.843  10307.025 - 10366.604:    2.8935%  (       43)
00:10:25.843  10366.604 - 10426.182:    3.3681%  (       41)
00:10:25.843  10426.182 - 10485.760:    4.1435%  (       67)
00:10:25.843  10485.760 - 10545.338:    4.8264%  (       59)
00:10:25.843  10545.338 - 10604.916:    5.5787%  (       65)
00:10:25.843  10604.916 - 10664.495:    6.3426%  (       66)
00:10:25.843  10664.495 - 10724.073:    7.0370%  (       60)
00:10:25.843  10724.073 - 10783.651:    7.7083%  (       58)
00:10:25.843  10783.651 - 10843.229:    8.4954%  (       68)
00:10:25.843  10843.229 - 10902.807:    9.2940%  (       69)
00:10:25.843  10902.807 - 10962.385:   10.1736%  (       76)
00:10:25.843  10962.385 - 11021.964:   10.9491%  (       67)
00:10:25.843  11021.964 - 11081.542:   11.8287%  (       76)
00:10:25.843  11081.542 - 11141.120:   12.7431%  (       79)
00:10:25.843  11141.120 - 11200.698:   13.7269%  (       85)
00:10:25.843  11200.698 - 11260.276:   14.5370%  (       70)
00:10:25.843  11260.276 - 11319.855:   15.3241%  (       68)
00:10:25.843  11319.855 - 11379.433:   16.1574%  (       72)
00:10:25.843  11379.433 - 11439.011:   16.9676%  (       70)
00:10:25.843  11439.011 - 11498.589:   17.9282%  (       83)
00:10:25.843  11498.589 - 11558.167:   18.9005%  (       84)
00:10:25.843  11558.167 - 11617.745:   19.7917%  (       77)
00:10:25.843  11617.745 - 11677.324:   20.6481%  (       74)
00:10:25.843  11677.324 - 11736.902:   21.5162%  (       75)
00:10:25.843  11736.902 - 11796.480:   22.3843%  (       75)
00:10:25.843  11796.480 - 11856.058:   23.4028%  (       88)
00:10:25.843  11856.058 - 11915.636:   24.3981%  (       86)
00:10:25.843  11915.636 - 11975.215:   25.4167%  (       88)
00:10:25.843  11975.215 - 12034.793:   26.4931%  (       93)
00:10:25.843  12034.793 - 12094.371:   27.6389%  (       99)
00:10:25.843  12094.371 - 12153.949:   28.7500%  (       96)
00:10:25.843  12153.949 - 12213.527:   29.8495%  (       95)
00:10:25.843  12213.527 - 12273.105:   30.8218%  (       84)
00:10:25.843  12273.105 - 12332.684:   31.6898%  (       75)
00:10:25.843  12332.684 - 12392.262:   32.6042%  (       79)
00:10:25.843  12392.262 - 12451.840:   33.4838%  (       76)
00:10:25.843  12451.840 - 12511.418:   34.4560%  (       84)
00:10:25.843  12511.418 - 12570.996:   35.3356%  (       76)
00:10:25.843  12570.996 - 12630.575:   36.2037%  (       75)
00:10:25.843  12630.575 - 12690.153:   37.1181%  (       79)
00:10:25.843  12690.153 - 12749.731:   38.1134%  (       86)
00:10:25.843  12749.731 - 12809.309:   38.9236%  (       70)
00:10:25.843  12809.309 - 12868.887:   39.8727%  (       82)
00:10:25.843  12868.887 - 12928.465:   40.5671%  (       60)
00:10:25.843  12928.465 - 12988.044:   41.2500%  (       59)
00:10:25.843  12988.044 - 13047.622:   41.9213%  (       58)
00:10:25.843  13047.622 - 13107.200:   42.5347%  (       53)
00:10:25.843  13107.200 - 13166.778:   43.2407%  (       61)
00:10:25.843  13166.778 - 13226.356:   43.8889%  (       56)
00:10:25.843  13226.356 - 13285.935:   44.5486%  (       57)
00:10:25.843  13285.935 - 13345.513:   45.2431%  (       60)
00:10:25.843  13345.513 - 13405.091:   45.9028%  (       57)
00:10:25.843  13405.091 - 13464.669:   46.5394%  (       55)
00:10:25.843  13464.669 - 13524.247:   47.3032%  (       66)
00:10:25.843  13524.247 - 13583.825:   47.9051%  (       52)
00:10:25.843  13583.825 - 13643.404:   48.5880%  (       59)
00:10:25.843  13643.404 - 13702.982:   49.3056%  (       62)
00:10:25.843  13702.982 - 13762.560:   49.9190%  (       53)
00:10:25.843  13762.560 - 13822.138:   50.5208%  (       52)
00:10:25.843  13822.138 - 13881.716:   51.0880%  (       49)
00:10:25.843  13881.716 - 13941.295:   51.6898%  (       52)
00:10:25.843  13941.295 - 14000.873:   52.2801%  (       51)
00:10:25.843  14000.873 - 14060.451:   52.8356%  (       48)
00:10:25.843  14060.451 - 14120.029:   53.4259%  (       51)
00:10:25.843  14120.029 - 14179.607:   53.9815%  (       48)
00:10:25.843  14179.607 - 14239.185:   54.6991%  (       62)
00:10:25.843  14239.185 - 14298.764:   55.4745%  (       67)
00:10:25.843  14298.764 - 14358.342:   56.2269%  (       65)
00:10:25.843  14358.342 - 14417.920:   57.0139%  (       68)
00:10:25.843  14417.920 - 14477.498:   57.7662%  (       65)
00:10:25.843  14477.498 - 14537.076:   58.6227%  (       74)
00:10:25.843  14537.076 - 14596.655:   59.3171%  (       60)
00:10:25.843  14596.655 - 14656.233:   60.0347%  (       62)
00:10:25.843  14656.233 - 14715.811:   60.6366%  (       52)
00:10:25.843  14715.811 - 14775.389:   61.1458%  (       44)
00:10:25.843  14775.389 - 14834.967:   61.7130%  (       49)
00:10:25.843  14834.967 - 14894.545:   62.3032%  (       51)
00:10:25.843  14894.545 - 14954.124:   62.7431%  (       38)
00:10:25.843  14954.124 - 15013.702:   63.1597%  (       36)
00:10:25.843  15013.702 - 15073.280:   63.6111%  (       39)
00:10:25.843  15073.280 - 15132.858:   64.0625%  (       39)
00:10:25.843  15132.858 - 15192.436:   64.4444%  (       33)
00:10:25.843  15192.436 - 15252.015:   64.8032%  (       31)
00:10:25.843  15252.015 - 15371.171:   65.4282%  (       54)
00:10:25.843  15371.171 - 15490.327:   65.9144%  (       42)
00:10:25.843  15490.327 - 15609.484:   66.4815%  (       49)
00:10:25.843  15609.484 - 15728.640:   67.0602%  (       50)
00:10:25.843  15728.640 - 15847.796:   67.5579%  (       43)
00:10:25.843  15847.796 - 15966.953:   68.0903%  (       46)
00:10:25.843  15966.953 - 16086.109:   68.5185%  (       37)
00:10:25.843  16086.109 - 16205.265:   68.8194%  (       26)
00:10:25.843  16205.265 - 16324.422:   69.1667%  (       30)
00:10:25.843  16324.422 - 16443.578:   69.7454%  (       50)
00:10:25.843  16443.578 - 16562.735:   70.4977%  (       65)
00:10:25.843  16562.735 - 16681.891:   71.4468%  (       82)
00:10:25.843  16681.891 - 16801.047:   72.1412%  (       60)
00:10:25.843  16801.047 - 16920.204:   72.7431%  (       52)
00:10:25.843  16920.204 - 17039.360:   73.5764%  (       72)
00:10:25.843  17039.360 - 17158.516:   74.3634%  (       68)
00:10:25.843  17158.516 - 17277.673:   75.0463%  (       59)
00:10:25.843  17277.673 - 17396.829:   75.8102%  (       66)
00:10:25.843  17396.829 - 17515.985:   76.6319%  (       71)
00:10:25.843  17515.985 - 17635.142:   77.4190%  (       68)
00:10:25.843  17635.142 - 17754.298:   78.0556%  (       55)
00:10:25.843  17754.298 - 17873.455:   78.7500%  (       60)
00:10:25.843  17873.455 - 17992.611:   79.5255%  (       67)
00:10:25.843  17992.611 - 18111.767:   80.2431%  (       62)
00:10:25.843  18111.767 - 18230.924:   81.0532%  (       70)
00:10:25.843  18230.924 - 18350.080:   81.7708%  (       62)
00:10:25.843  18350.080 - 18469.236:   82.5579%  (       68)
00:10:25.843  18469.236 - 18588.393:   83.3681%  (       70)
00:10:25.843  18588.393 - 18707.549:   84.2708%  (       78)
00:10:25.843  18707.549 - 18826.705:   84.9769%  (       61)
00:10:25.843  18826.705 - 18945.862:   85.6597%  (       59)
00:10:25.843  18945.862 - 19065.018:   86.3426%  (       59)
00:10:25.843  19065.018 - 19184.175:   87.0718%  (       63)
00:10:25.843  19184.175 - 19303.331:   87.8125%  (       64)
00:10:25.843  19303.331 - 19422.487:   88.5532%  (       64)
00:10:25.843  19422.487 - 19541.644:   89.2130%  (       57)
00:10:25.843  19541.644 - 19660.800:   89.7338%  (       45)
00:10:25.843  19660.800 - 19779.956:   90.1968%  (       40)
00:10:25.843  19779.956 - 19899.113:   90.6366%  (       38)
00:10:25.843  19899.113 - 20018.269:   91.0532%  (       36)
00:10:25.843  20018.269 - 20137.425:   91.4352%  (       33)
00:10:25.843  20137.425 - 20256.582:   91.9444%  (       44)
00:10:25.843  20256.582 - 20375.738:   92.4421%  (       43)
00:10:25.843  20375.738 - 20494.895:   92.8241%  (       33)
00:10:25.843  20494.895 - 20614.051:   93.1481%  (       28)
00:10:25.843  20614.051 - 20733.207:   93.4028%  (       22)
00:10:25.843  20733.207 - 20852.364:   93.6921%  (       25)
00:10:25.843  20852.364 - 20971.520:   93.9468%  (       22)
00:10:25.843  20971.520 - 21090.676:   94.2130%  (       23)
00:10:25.843  21090.676 - 21209.833:   94.4676%  (       22)
00:10:25.843  21209.833 - 21328.989:   94.7569%  (       25)
00:10:25.843  21328.989 - 21448.145:   95.0000%  (       21)
00:10:25.843  21448.145 - 21567.302:   95.1852%  (       16)
00:10:25.843  21567.302 - 21686.458:   95.4167%  (       20)
00:10:25.843  21686.458 - 21805.615:   95.6134%  (       17)
00:10:25.843  21805.615 - 21924.771:   95.8333%  (       19)
00:10:25.843  21924.771 - 22043.927:   96.0880%  (       22)
00:10:25.843  22043.927 - 22163.084:   96.3426%  (       22)
00:10:25.843  22163.084 - 22282.240:   96.5278%  (       16)
00:10:25.843  22282.240 - 22401.396:   96.7130%  (       16)
00:10:25.843  22401.396 - 22520.553:   96.8866%  (       15)
00:10:25.843  22520.553 - 22639.709:   97.0949%  (       18)
00:10:25.843  22639.709 - 22758.865:   97.2454%  (       13)
00:10:25.843  22758.865 - 22878.022:   97.4190%  (       15)
00:10:25.843  22878.022 - 22997.178:   97.5926%  (       15)
00:10:25.843  22997.178 - 23116.335:   97.7431%  (       13)
00:10:25.843  23116.335 - 23235.491:   97.8819%  (       12)
00:10:25.843  23235.491 - 23354.647:   98.0093%  (       11)
00:10:25.843  23354.647 - 23473.804:   98.1019%  (        8)
00:10:25.843  23473.804 - 23592.960:   98.2176%  (       10)
00:10:25.843  23592.960 - 23712.116:   98.3333%  (       10)
00:10:25.843  23712.116 - 23831.273:   98.4144%  (        7)
00:10:25.843  23831.273 - 23950.429:   98.4838%  (        6)
00:10:25.843  23950.429 - 24069.585:   98.5648%  (        7)
00:10:25.843  24069.585 - 24188.742:   98.6343%  (        6)
00:10:25.843  24188.742 - 24307.898:   98.7153%  (        7)
00:10:25.843  24307.898 - 24427.055:   98.7616%  (        4)
00:10:25.843  24427.055 - 24546.211:   98.7963%  (        3)
00:10:25.843  24546.211 - 24665.367:   98.8310%  (        3)
00:10:25.843  24665.367 - 24784.524:   98.8542%  (        2)
00:10:25.843  24784.524 - 24903.680:   98.8889%  (        3)
00:10:25.843  24903.680 - 25022.836:   98.9120%  (        2)
00:10:25.843  25022.836 - 25141.993:   98.9468%  (        3)
00:10:25.843  25141.993 - 25261.149:   98.9699%  (        2)
00:10:25.843  25261.149 - 25380.305:   99.0046%  (        3)
00:10:25.843  25380.305 - 25499.462:   99.0394%  (        3)
00:10:25.843  25499.462 - 25618.618:   99.0741%  (        3)
00:10:25.844  25618.618 - 25737.775:   99.0972%  (        2)
00:10:25.844  25737.775 - 25856.931:   99.1319%  (        3)
00:10:25.844  25856.931 - 25976.087:   99.1551%  (        2)
00:10:25.844  25976.087 - 26095.244:   99.1898%  (        3)
00:10:25.844  26095.244 - 26214.400:   99.2245%  (        3)
00:10:25.844  26214.400 - 26333.556:   99.2593%  (        3)
00:10:25.844  34317.033 - 34555.345:   99.2824%  (        2)
00:10:25.844  34555.345 - 34793.658:   99.3403%  (        5)
00:10:25.844  34793.658 - 35031.971:   99.4097%  (        6)
00:10:25.844  35031.971 - 35270.284:   99.4560%  (        4)
00:10:25.844  35270.284 - 35508.596:   99.5139%  (        5)
00:10:25.844  35508.596 - 35746.909:   99.5833%  (        6)
00:10:25.844  35746.909 - 35985.222:   99.6412%  (        5)
00:10:25.844  35985.222 - 36223.535:   99.6991%  (        5)
00:10:25.844  36223.535 - 36461.847:   99.7685%  (        6)
00:10:25.844  36461.847 - 36700.160:   99.8264%  (        5)
00:10:25.844  36700.160 - 36938.473:   99.8958%  (        6)
00:10:25.844  36938.473 - 37176.785:   99.9537%  (        5)
00:10:25.844  37176.785 - 37415.098:  100.0000%  (        4)
00:10:25.844  
00:10:25.844  Latency histogram for PCIE (0000:00:12.0) NSID 3                  from core 0:
00:10:25.844  ==============================================================================
00:10:25.844         Range in us     Cumulative    IO count
00:10:25.844   9889.978 -  9949.556:    0.0116%  (        1)
00:10:25.844   9949.556 - 10009.135:    0.0579%  (        4)
00:10:25.844  10009.135 - 10068.713:    0.2083%  (       13)
00:10:25.844  10068.713 - 10128.291:    0.4398%  (       20)
00:10:25.844  10128.291 - 10187.869:    0.8681%  (       37)
00:10:25.844  10187.869 - 10247.447:    1.4236%  (       48)
00:10:25.844  10247.447 - 10307.025:    2.0833%  (       57)
00:10:25.844  10307.025 - 10366.604:    2.8588%  (       67)
00:10:25.844  10366.604 - 10426.182:    3.4606%  (       52)
00:10:25.844  10426.182 - 10485.760:    4.1088%  (       56)
00:10:25.844  10485.760 - 10545.338:    4.6065%  (       43)
00:10:25.844  10545.338 - 10604.916:    5.3588%  (       65)
00:10:25.844  10604.916 - 10664.495:    6.1458%  (       68)
00:10:25.844  10664.495 - 10724.073:    6.9444%  (       69)
00:10:25.844  10724.073 - 10783.651:    7.6852%  (       64)
00:10:25.844  10783.651 - 10843.229:    8.4491%  (       66)
00:10:25.844  10843.229 - 10902.807:    9.1435%  (       60)
00:10:25.844  10902.807 - 10962.385:    9.8495%  (       61)
00:10:25.844  10962.385 - 11021.964:   10.5671%  (       62)
00:10:25.844  11021.964 - 11081.542:   11.4931%  (       80)
00:10:25.844  11081.542 - 11141.120:   12.2917%  (       69)
00:10:25.844  11141.120 - 11200.698:   13.2986%  (       87)
00:10:25.844  11200.698 - 11260.276:   14.2245%  (       80)
00:10:25.844  11260.276 - 11319.855:   15.2083%  (       85)
00:10:25.844  11319.855 - 11379.433:   15.9954%  (       68)
00:10:25.844  11379.433 - 11439.011:   16.7824%  (       68)
00:10:25.844  11439.011 - 11498.589:   17.4884%  (       61)
00:10:25.844  11498.589 - 11558.167:   18.2639%  (       67)
00:10:25.844  11558.167 - 11617.745:   19.0856%  (       71)
00:10:25.844  11617.745 - 11677.324:   19.8032%  (       62)
00:10:25.844  11677.324 - 11736.902:   20.6134%  (       70)
00:10:25.844  11736.902 - 11796.480:   21.3657%  (       65)
00:10:25.844  11796.480 - 11856.058:   22.2106%  (       73)
00:10:25.844  11856.058 - 11915.636:   23.1713%  (       83)
00:10:25.844  11915.636 - 11975.215:   24.1667%  (       86)
00:10:25.844  11975.215 - 12034.793:   25.2546%  (       94)
00:10:25.844  12034.793 - 12094.371:   26.3889%  (       98)
00:10:25.844  12094.371 - 12153.949:   27.4190%  (       89)
00:10:25.844  12153.949 - 12213.527:   28.3565%  (       81)
00:10:25.844  12213.527 - 12273.105:   29.3519%  (       86)
00:10:25.844  12273.105 - 12332.684:   30.4514%  (       95)
00:10:25.844  12332.684 - 12392.262:   31.6435%  (      103)
00:10:25.844  12392.262 - 12451.840:   32.8125%  (      101)
00:10:25.844  12451.840 - 12511.418:   33.7616%  (       82)
00:10:25.844  12511.418 - 12570.996:   34.7801%  (       88)
00:10:25.844  12570.996 - 12630.575:   35.7407%  (       83)
00:10:25.844  12630.575 - 12690.153:   36.7245%  (       85)
00:10:25.844  12690.153 - 12749.731:   37.6157%  (       77)
00:10:25.844  12749.731 - 12809.309:   38.5532%  (       81)
00:10:25.844  12809.309 - 12868.887:   39.3750%  (       71)
00:10:25.844  12868.887 - 12928.465:   40.2315%  (       74)
00:10:25.844  12928.465 - 12988.044:   41.0301%  (       69)
00:10:25.844  12988.044 - 13047.622:   41.8287%  (       69)
00:10:25.844  13047.622 - 13107.200:   42.6157%  (       68)
00:10:25.844  13107.200 - 13166.778:   43.4375%  (       71)
00:10:25.844  13166.778 - 13226.356:   44.3519%  (       79)
00:10:25.844  13226.356 - 13285.935:   45.2315%  (       76)
00:10:25.844  13285.935 - 13345.513:   45.9954%  (       66)
00:10:25.844  13345.513 - 13405.091:   46.8171%  (       71)
00:10:25.844  13405.091 - 13464.669:   47.6042%  (       68)
00:10:25.844  13464.669 - 13524.247:   48.2639%  (       57)
00:10:25.844  13524.247 - 13583.825:   48.9699%  (       61)
00:10:25.844  13583.825 - 13643.404:   49.6065%  (       55)
00:10:25.844  13643.404 - 13702.982:   50.2778%  (       58)
00:10:25.844  13702.982 - 13762.560:   50.9259%  (       56)
00:10:25.844  13762.560 - 13822.138:   51.6204%  (       60)
00:10:25.844  13822.138 - 13881.716:   52.3264%  (       61)
00:10:25.844  13881.716 - 13941.295:   52.9398%  (       53)
00:10:25.844  13941.295 - 14000.873:   53.4491%  (       44)
00:10:25.844  14000.873 - 14060.451:   53.9468%  (       43)
00:10:25.844  14060.451 - 14120.029:   54.4213%  (       41)
00:10:25.844  14120.029 - 14179.607:   54.9769%  (       48)
00:10:25.844  14179.607 - 14239.185:   55.5208%  (       47)
00:10:25.844  14239.185 - 14298.764:   56.0880%  (       49)
00:10:25.844  14298.764 - 14358.342:   56.6898%  (       52)
00:10:25.844  14358.342 - 14417.920:   57.3843%  (       60)
00:10:25.844  14417.920 - 14477.498:   58.0324%  (       56)
00:10:25.844  14477.498 - 14537.076:   58.7037%  (       58)
00:10:25.844  14537.076 - 14596.655:   59.3519%  (       56)
00:10:25.844  14596.655 - 14656.233:   60.0116%  (       57)
00:10:25.844  14656.233 - 14715.811:   60.7176%  (       61)
00:10:25.844  14715.811 - 14775.389:   61.3426%  (       54)
00:10:25.844  14775.389 - 14834.967:   61.8287%  (       42)
00:10:25.844  14834.967 - 14894.545:   62.3032%  (       41)
00:10:25.844  14894.545 - 14954.124:   62.6736%  (       32)
00:10:25.844  14954.124 - 15013.702:   62.9977%  (       28)
00:10:25.844  15013.702 - 15073.280:   63.3449%  (       30)
00:10:25.844  15073.280 - 15132.858:   63.7384%  (       34)
00:10:25.844  15132.858 - 15192.436:   64.0625%  (       28)
00:10:25.844  15192.436 - 15252.015:   64.3287%  (       23)
00:10:25.844  15252.015 - 15371.171:   64.8495%  (       45)
00:10:25.844  15371.171 - 15490.327:   65.3125%  (       40)
00:10:25.844  15490.327 - 15609.484:   65.7986%  (       42)
00:10:25.844  15609.484 - 15728.640:   66.3657%  (       49)
00:10:25.844  15728.640 - 15847.796:   66.8056%  (       38)
00:10:25.844  15847.796 - 15966.953:   67.1759%  (       32)
00:10:25.844  15966.953 - 16086.109:   67.5116%  (       29)
00:10:25.844  16086.109 - 16205.265:   68.0208%  (       44)
00:10:25.844  16205.265 - 16324.422:   68.5417%  (       45)
00:10:25.844  16324.422 - 16443.578:   69.2593%  (       62)
00:10:25.844  16443.578 - 16562.735:   69.9421%  (       59)
00:10:25.844  16562.735 - 16681.891:   70.8218%  (       76)
00:10:25.844  16681.891 - 16801.047:   71.6782%  (       74)
00:10:25.844  16801.047 - 16920.204:   72.5579%  (       76)
00:10:25.844  16920.204 - 17039.360:   73.3796%  (       71)
00:10:25.844  17039.360 - 17158.516:   74.2130%  (       72)
00:10:25.844  17158.516 - 17277.673:   75.0694%  (       74)
00:10:25.844  17277.673 - 17396.829:   75.9375%  (       75)
00:10:25.844  17396.829 - 17515.985:   76.7361%  (       69)
00:10:25.844  17515.985 - 17635.142:   77.4653%  (       63)
00:10:25.844  17635.142 - 17754.298:   78.5185%  (       91)
00:10:25.844  17754.298 - 17873.455:   79.3981%  (       76)
00:10:25.844  17873.455 - 17992.611:   80.0694%  (       58)
00:10:25.844  17992.611 - 18111.767:   80.6366%  (       49)
00:10:25.844  18111.767 - 18230.924:   81.1806%  (       47)
00:10:25.844  18230.924 - 18350.080:   82.0023%  (       71)
00:10:25.844  18350.080 - 18469.236:   82.8588%  (       74)
00:10:25.844  18469.236 - 18588.393:   83.7384%  (       76)
00:10:25.844  18588.393 - 18707.549:   84.6065%  (       75)
00:10:25.844  18707.549 - 18826.705:   85.3588%  (       65)
00:10:25.844  18826.705 - 18945.862:   86.0764%  (       62)
00:10:25.844  18945.862 - 19065.018:   86.7940%  (       62)
00:10:25.844  19065.018 - 19184.175:   87.4769%  (       59)
00:10:25.844  19184.175 - 19303.331:   88.2060%  (       63)
00:10:25.844  19303.331 - 19422.487:   88.9931%  (       68)
00:10:25.844  19422.487 - 19541.644:   89.6644%  (       58)
00:10:25.844  19541.644 - 19660.800:   90.2083%  (       47)
00:10:25.844  19660.800 - 19779.956:   90.6829%  (       41)
00:10:25.844  19779.956 - 19899.113:   91.1574%  (       41)
00:10:25.844  19899.113 - 20018.269:   91.6319%  (       41)
00:10:25.844  20018.269 - 20137.425:   92.1644%  (       46)
00:10:25.844  20137.425 - 20256.582:   92.7083%  (       47)
00:10:25.844  20256.582 - 20375.738:   93.0787%  (       32)
00:10:25.844  20375.738 - 20494.895:   93.4722%  (       34)
00:10:25.844  20494.895 - 20614.051:   93.8889%  (       36)
00:10:25.844  20614.051 - 20733.207:   94.2361%  (       30)
00:10:25.844  20733.207 - 20852.364:   94.4792%  (       21)
00:10:25.844  20852.364 - 20971.520:   94.6528%  (       15)
00:10:25.844  20971.520 - 21090.676:   94.8727%  (       19)
00:10:25.844  21090.676 - 21209.833:   95.0347%  (       14)
00:10:25.844  21209.833 - 21328.989:   95.2431%  (       18)
00:10:25.844  21328.989 - 21448.145:   95.4861%  (       21)
00:10:25.844  21448.145 - 21567.302:   95.7176%  (       20)
00:10:25.844  21567.302 - 21686.458:   95.8796%  (       14)
00:10:25.844  21686.458 - 21805.615:   96.1574%  (       24)
00:10:25.844  21805.615 - 21924.771:   96.3889%  (       20)
00:10:25.844  21924.771 - 22043.927:   96.5972%  (       18)
00:10:25.844  22043.927 - 22163.084:   96.8519%  (       22)
00:10:25.844  22163.084 - 22282.240:   97.0602%  (       18)
00:10:25.844  22282.240 - 22401.396:   97.2801%  (       19)
00:10:25.844  22401.396 - 22520.553:   97.5000%  (       19)
00:10:25.844  22520.553 - 22639.709:   97.7199%  (       19)
00:10:25.844  22639.709 - 22758.865:   97.9051%  (       16)
00:10:25.844  22758.865 - 22878.022:   98.0324%  (       11)
00:10:25.844  22878.022 - 22997.178:   98.1481%  (       10)
00:10:25.844  22997.178 - 23116.335:   98.2870%  (       12)
00:10:25.844  23116.335 - 23235.491:   98.4144%  (       11)
00:10:25.844  23235.491 - 23354.647:   98.5301%  (       10)
00:10:25.844  23354.647 - 23473.804:   98.6227%  (        8)
00:10:25.844  23473.804 - 23592.960:   98.7384%  (       10)
00:10:25.844  23592.960 - 23712.116:   98.8426%  (        9)
00:10:25.844  23712.116 - 23831.273:   98.9236%  (        7)
00:10:25.844  23831.273 - 23950.429:   98.9931%  (        6)
00:10:25.844  23950.429 - 24069.585:   99.0741%  (        7)
00:10:25.844  24069.585 - 24188.742:   99.1435%  (        6)
00:10:25.844  24188.742 - 24307.898:   99.2245%  (        7)
00:10:25.844  24307.898 - 24427.055:   99.2477%  (        2)
00:10:25.845  24427.055 - 24546.211:   99.2593%  (        1)
00:10:25.845  32410.531 - 32648.844:   99.3056%  (        4)
00:10:25.845  32648.844 - 32887.156:   99.3519%  (        4)
00:10:25.845  32887.156 - 33125.469:   99.4097%  (        5)
00:10:25.845  33125.469 - 33363.782:   99.4560%  (        4)
00:10:25.845  33363.782 - 33602.095:   99.5023%  (        4)
00:10:25.845  33602.095 - 33840.407:   99.5486%  (        4)
00:10:25.845  33840.407 - 34078.720:   99.5949%  (        4)
00:10:25.845  34078.720 - 34317.033:   99.6412%  (        4)
00:10:25.845  34317.033 - 34555.345:   99.6875%  (        4)
00:10:25.845  34555.345 - 34793.658:   99.7338%  (        4)
00:10:25.845  34793.658 - 35031.971:   99.7801%  (        4)
00:10:25.845  35031.971 - 35270.284:   99.8380%  (        5)
00:10:25.845  35270.284 - 35508.596:   99.8843%  (        4)
00:10:25.845  35508.596 - 35746.909:   99.9190%  (        3)
00:10:25.845  35746.909 - 35985.222:   99.9653%  (        4)
00:10:25.845  35985.222 - 36223.535:  100.0000%  (        3)
00:10:25.845  
00:10:25.845   14:21:04 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:10:25.845  
00:10:25.845  real	0m2.841s
00:10:25.845  user	0m2.372s
00:10:25.845  sys	0m0.349s
00:10:25.845   14:21:04 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:25.845   14:21:04 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:10:25.845  ************************************
00:10:25.845  END TEST nvme_perf
00:10:25.845  ************************************
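The latency histograms above are per-namespace output from the nvme_perf run: each row is one bucket, giving its range in microseconds, the cumulative share of IOs completed at or below the bucket's upper bound, and the raw IO count for that bucket. Buckets with zero IOs are omitted, which is why the ranges jump (for example from ~26.3 ms to ~34.3 ms in the NSID 3 table) where no completions landed. A tail percentile can be read straight out of this text; below is a minimal awk sketch, assuming the log was saved as perf.log (a placeholder name) and the file holds a single histogram, since it stops at the first match:

    # Print the upper bound of the first bucket whose cumulative share
    # reaches 99%, i.e. an upper estimate of the p99 latency.
    awk '/ - .*%/ {
        for (i = 1; i <= NF; i++) if ($i == "-") lo = i  # locate the range separator
        ub = $(lo + 1); sub(/:$/, "", ub)                # bucket upper bound (us)
        pct = $(lo + 2); sub(/%/, "", pct)               # cumulative percentage
        if (pct + 0 >= 99) { print "p99 <= " ub " us"; exit }
    }' perf.log

Run against just the NSID 3 table, this prints p99 <= 24069.585 us, the row where the cumulative column first crosses 99%.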
00:10:26.104   14:21:04 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:26.104   14:21:04 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:26.104   14:21:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:26.104   14:21:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:26.104  ************************************
00:10:26.104  START TEST nvme_hello_world
00:10:26.104  ************************************
00:10:26.104   14:21:04 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:26.362  Initializing NVMe Controllers
00:10:26.362  Attached to 0000:00:10.0
00:10:26.362    Namespace ID: 1 size: 6GB
00:10:26.362  Attached to 0000:00:11.0
00:10:26.362    Namespace ID: 1 size: 5GB
00:10:26.362  Attached to 0000:00:13.0
00:10:26.362    Namespace ID: 1 size: 1GB
00:10:26.362  Attached to 0000:00:12.0
00:10:26.362    Namespace ID: 1 size: 4GB
00:10:26.362    Namespace ID: 2 size: 4GB
00:10:26.362    Namespace ID: 3 size: 4GB
00:10:26.362  Initialization complete.
00:10:26.362  INFO: using host memory buffer for IO
00:10:26.362  Hello world!
00:10:26.362  INFO: using host memory buffer for IO
00:10:26.362  Hello world!
00:10:26.362  INFO: using host memory buffer for IO
00:10:26.362  Hello world!
00:10:26.362  INFO: using host memory buffer for IO
00:10:26.362  Hello world!
00:10:26.362  INFO: using host memory buffer for IO
00:10:26.362  Hello world!
00:10:26.362  INFO: using host memory buffer for IO
00:10:26.362  Hello world!
00:10:26.362  
00:10:26.362  real	0m0.370s
00:10:26.362  user	0m0.121s
00:10:26.362  sys	0m0.191s
00:10:26.362   14:21:05 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:26.362   14:21:05 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:10:26.362  ************************************
00:10:26.362  END TEST nvme_hello_world
00:10:26.362  ************************************
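hello_world attaches to all four emulated controllers, writes a short greeting to each active namespace, reads it back, and prints the data; the six "Hello world!" lines match the six namespaces (one each behind 10.0, 11.0 and 13.0, three behind 12.0), and "INFO: using host memory buffer for IO" means the data buffer sits in host memory because the controller exposes no controller memory buffer. The -i 0 argument is SPDK's shared-memory instance id, letting the example coexist with the other test processes. To rerun it standalone:

    sudo /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0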
00:10:26.362   14:21:05 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:26.362   14:21:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:26.362   14:21:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:26.362   14:21:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:26.362  ************************************
00:10:26.362  START TEST nvme_sgl
00:10:26.362  ************************************
00:10:26.362   14:21:05 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:26.621  0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:10:26.621  0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:10:26.621  0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:10:26.621  0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:10:26.621  0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:10:26.621  0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:10:26.621  0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:10:26.621  0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:10:26.879  0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:10:26.879  0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:10:26.879  0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:10:26.879  0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:10:26.879  0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:10:26.879  0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:10:26.879  NVMe Readv/Writev Request test
00:10:26.879  Attached to 0000:00:10.0
00:10:26.879  Attached to 0000:00:11.0
00:10:26.879  Attached to 0000:00:13.0
00:10:26.879  Attached to 0000:00:12.0
00:10:26.879  0000:00:10.0: build_io_request_2 test passed
00:10:26.879  0000:00:10.0: build_io_request_4 test passed
00:10:26.879  0000:00:10.0: build_io_request_5 test passed
00:10:26.879  0000:00:10.0: build_io_request_6 test passed
00:10:26.879  0000:00:10.0: build_io_request_7 test passed
00:10:26.879  0000:00:10.0: build_io_request_10 test passed
00:10:26.879  0000:00:11.0: build_io_request_2 test passed
00:10:26.879  0000:00:11.0: build_io_request_4 test passed
00:10:26.879  0000:00:11.0: build_io_request_5 test passed
00:10:26.879  0000:00:11.0: build_io_request_6 test passed
00:10:26.879  0000:00:11.0: build_io_request_7 test passed
00:10:26.879  0000:00:11.0: build_io_request_10 test passed
00:10:26.879  Cleaning up...
00:10:26.879  
00:10:26.879  real	0m0.424s
00:10:26.879  user	0m0.212s
00:10:26.879  sys	0m0.165s
00:10:26.879   14:21:05 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:26.879   14:21:05 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:10:26.879  ************************************
00:10:26.879  END TEST nvme_sgl
00:10:26.879  ************************************
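The SGL test builds twelve scatter-gather request shapes (build_io_request_0 through _11) against every controller. Shapes whose total transfer length is invalid for the namespace are rejected up front with "Invalid IO length parameter"; the rest are submitted and verified ("test passed"). On 10.0 and 11.0 six shapes pass and six are rejected, while on 13.0 and 12.0 all twelve are rejected, plausibly because those namespaces use a block size that makes every tested length invalid; the rejections are expected either way and the run completes cleanly. A quick per-controller tally from a saved copy of this log (sgl.log is a placeholder name):

    # Count rejected request shapes per controller; expect 6 for 10.0/11.0
    # and 12 for 13.0/12.0 given the output above.
    grep 'Invalid IO length' sgl.log | awk '{ print $2 }' | sort | uniq -c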
00:10:26.879   14:21:05 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:10:26.879   14:21:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:26.879   14:21:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:26.879   14:21:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:26.879  ************************************
00:10:26.879  START TEST nvme_e2edp
00:10:26.879  ************************************
00:10:26.879   14:21:05 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:10:27.138  NVMe Write/Read with End-to-End data protection test
00:10:27.138  Attached to 0000:00:10.0
00:10:27.138  Attached to 0000:00:11.0
00:10:27.138  Attached to 0000:00:13.0
00:10:27.138  Attached to 0000:00:12.0
00:10:27.138  Cleaning up...
00:10:27.138  
00:10:27.138  real	0m0.394s
00:10:27.138  user	0m0.140s
00:10:27.138  sys	0m0.186s
00:10:27.138   14:21:06 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:27.138   14:21:06 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:10:27.138  ************************************
00:10:27.138  END TEST nvme_e2edp
00:10:27.138  ************************************
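nvme_dp is the end-to-end data protection (DIF) write/read test. Here it attaches to all four controllers and goes straight to cleanup, most likely because none of the emulated namespaces is formatted with protection information, so there is nothing to exercise and the test passes in under half a second. To rerun it on its own:

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp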
00:10:27.396   14:21:06 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:27.396   14:21:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:27.396   14:21:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:27.396   14:21:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:27.396  ************************************
00:10:27.396  START TEST nvme_reserve
00:10:27.396  ************************************
00:10:27.396   14:21:06 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:27.654  =====================================================
00:10:27.654  NVMe Controller at PCI bus 0, device 16, function 0
00:10:27.654  =====================================================
00:10:27.654  Reservations:                Not Supported
00:10:27.654  =====================================================
00:10:27.654  NVMe Controller at PCI bus 0, device 17, function 0
00:10:27.654  =====================================================
00:10:27.654  Reservations:                Not Supported
00:10:27.654  =====================================================
00:10:27.654  NVMe Controller at PCI bus 0, device 19, function 0
00:10:27.654  =====================================================
00:10:27.654  Reservations:                Not Supported
00:10:27.654  =====================================================
00:10:27.654  NVMe Controller at PCI bus 0, device 18, function 0
00:10:27.654  =====================================================
00:10:27.654  Reservations:                Not Supported
00:10:27.654  Reservation test passed
00:10:27.654  
00:10:27.654  real	0m0.426s
00:10:27.654  user	0m0.207s
00:10:27.654  sys	0m0.169s
00:10:27.654   14:21:06 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:27.654   14:21:06 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:10:27.654  ************************************
00:10:27.654  END TEST nvme_reserve
00:10:27.654  ************************************
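The reserve test checks each controller's reservation capability before attempting the register/acquire/release sequence; all four QEMU controllers report "Reservations: Not Supported", so the sequence is skipped everywhere and the run still ends with "Reservation test passed". A one-line check against a saved copy of this log (reserve.log is a placeholder name):

    grep -c 'Reservations:.*Not Supported' reserve.log   # expect 4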
00:10:27.654   14:21:06 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:10:27.654   14:21:06 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:27.654   14:21:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:27.654   14:21:06 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:27.654  ************************************
00:10:27.654  START TEST nvme_err_injection
00:10:27.654  ************************************
00:10:27.654   14:21:06 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:10:28.220  NVMe Error Injection test
00:10:28.220  Attached to 0000:00:10.0
00:10:28.220  Attached to 0000:00:11.0
00:10:28.220  Attached to 0000:00:13.0
00:10:28.220  Attached to 0000:00:12.0
00:10:28.220  0000:00:12.0: get features failed as expected
00:10:28.220  0000:00:10.0: get features failed as expected
00:10:28.220  0000:00:11.0: get features failed as expected
00:10:28.220  0000:00:13.0: get features failed as expected
00:10:28.220  0000:00:10.0: get features successfully as expected
00:10:28.220  0000:00:11.0: get features successfully as expected
00:10:28.220  0000:00:13.0: get features successfully as expected
00:10:28.220  0000:00:12.0: get features successfully as expected
00:10:28.220  0000:00:10.0: read failed as expected
00:10:28.220  0000:00:11.0: read failed as expected
00:10:28.220  0000:00:13.0: read failed as expected
00:10:28.220  0000:00:12.0: read failed as expected
00:10:28.220  0000:00:10.0: read successfully as expected
00:10:28.220  0000:00:11.0: read successfully as expected
00:10:28.220  0000:00:13.0: read successfully as expected
00:10:28.220  0000:00:12.0: read successfully as expected
00:10:28.220  Cleaning up...
00:10:28.220  
00:10:28.220  real	0m0.429s
00:10:28.220  user	0m0.167s
00:10:28.220  sys	0m0.210s
00:10:28.220   14:21:07 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:28.220   14:21:07 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:10:28.220  ************************************
00:10:28.220  END TEST nvme_err_injection
00:10:28.220  ************************************
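err_injection arms SPDK's software error-injection hook for an admin Get Features command and an IO read on each controller, verifies the armed commands fail ("failed as expected"), then disarms the hook and verifies the same commands go through (the tool's own "successfully as expected" wording). The hook is presumably spdk_nvme_qpair_add_cmd_error_injection(); treat that name as an assumption about the current SPDK API. To rerun standalone:

    sudo /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection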
00:10:28.220   14:21:07 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:10:28.220   14:21:07 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:10:28.220   14:21:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:28.220   14:21:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:28.220  ************************************
00:10:28.220  START TEST nvme_overhead
00:10:28.220  ************************************
00:10:28.220   14:21:07 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:10:29.597  Initializing NVMe Controllers
00:10:29.597  Attached to 0000:00:10.0
00:10:29.597  Attached to 0000:00:11.0
00:10:29.597  Attached to 0000:00:13.0
00:10:29.597  Attached to 0000:00:12.0
00:10:29.597  Initialization complete. Launching workers.
00:10:29.597  submit (in ns)   avg, min, max =  22461.9,  14248.6, 767446.8
00:10:29.597  complete (in ns) avg, min, max =  16005.6,   9361.4, 4067946.4
00:10:29.597  
00:10:29.597  Submit histogram
00:10:29.597  ================
00:10:29.597         Range in us     Cumulative     Count
00:10:29.597     14.196 -    14.255:    0.0092%  (        1)
00:10:29.597     14.371 -    14.429:    0.0276%  (        2)
00:10:29.597     14.429 -    14.487:    0.1749%  (       16)
00:10:29.597     14.487 -    14.545:    0.5522%  (       41)
00:10:29.597     14.545 -    14.604:    1.5278%  (      106)
00:10:29.597     14.604 -    14.662:    3.6631%  (      232)
00:10:29.597     14.662 -    14.720:    6.5992%  (      319)
00:10:29.597     14.720 -    14.778:   10.5200%  (      426)
00:10:29.597     14.778 -    14.836:   13.9162%  (      369)
00:10:29.597     14.836 -    14.895:   16.4289%  (      273)
00:10:29.597     14.895 -    15.011:   20.0276%  (      391)
00:10:29.597     15.011 -    15.127:   21.8684%  (      200)
00:10:29.597     15.127 -    15.244:   22.9452%  (      117)
00:10:29.597     15.244 -    15.360:   23.7184%  (       84)
00:10:29.597     15.360 -    15.476:   24.3810%  (       72)
00:10:29.597     15.476 -    15.593:   24.9241%  (       59)
00:10:29.597     15.593 -    15.709:   25.4763%  (       60)
00:10:29.597     15.709 -    15.825:   26.1482%  (       73)
00:10:29.597     15.825 -    15.942:   26.6360%  (       53)
00:10:29.597     15.942 -    16.058:   27.0318%  (       43)
00:10:29.597     16.058 -    16.175:   27.3263%  (       32)
00:10:29.597     16.175 -    16.291:   27.5748%  (       27)
00:10:29.597     16.291 -    16.407:   27.7773%  (       22)
00:10:29.597     16.407 -    16.524:   27.8969%  (       13)
00:10:29.597     16.524 -    16.640:   28.0350%  (       15)
00:10:29.597     16.640 -    16.756:   28.1546%  (       13)
00:10:29.597     16.756 -    16.873:   28.3203%  (       18)
00:10:29.597     16.873 -    16.989:   28.5504%  (       25)
00:10:29.597     16.989 -    17.105:   28.6148%  (        7)
00:10:29.597     17.105 -    17.222:   28.7437%  (       14)
00:10:29.597     17.222 -    17.338:   28.8449%  (       11)
00:10:29.597     17.338 -    17.455:   28.9001%  (        6)
00:10:29.597     17.455 -    17.571:   28.9738%  (        8)
00:10:29.597     17.571 -    17.687:   29.0290%  (        6)
00:10:29.597     17.687 -    17.804:   29.0566%  (        3)
00:10:29.597     17.804 -    17.920:   29.0750%  (        2)
00:10:29.597     17.920 -    18.036:   29.1302%  (        6)
00:10:29.597     18.036 -    18.153:   29.1578%  (        3)
00:10:29.597     18.153 -    18.269:   29.1855%  (        3)
00:10:29.597     18.269 -    18.385:   29.2039%  (        2)
00:10:29.597     18.385 -    18.502:   29.2223%  (        2)
00:10:29.597     18.502 -    18.618:   29.2867%  (        7)
00:10:29.597     18.618 -    18.735:   29.3143%  (        3)
00:10:29.597     18.851 -    18.967:   29.3511%  (        4)
00:10:29.597     18.967 -    19.084:   29.3971%  (        5)
00:10:29.597     19.084 -    19.200:   29.4248%  (        3)
00:10:29.597     19.200 -    19.316:   29.5260%  (       11)
00:10:29.597     19.316 -    19.433:   29.6272%  (       11)
00:10:29.597     19.433 -    19.549:   29.7653%  (       15)
00:10:29.597     19.549 -    19.665:   29.8850%  (       13)
00:10:29.597     19.665 -    19.782:   30.0414%  (       17)
00:10:29.597     19.782 -    19.898:   30.2715%  (       25)
00:10:29.597     19.898 -    20.015:   30.4096%  (       15)
00:10:29.597     20.015 -    20.131:   30.6857%  (       30)
00:10:29.597     20.131 -    20.247:   30.9802%  (       32)
00:10:29.597     20.247 -    20.364:   31.2563%  (       30)
00:10:29.597     20.364 -    20.480:   31.7441%  (       53)
00:10:29.597     20.480 -    20.596:   32.3332%  (       64)
00:10:29.597     20.596 -    20.713:   33.0143%  (       74)
00:10:29.597     20.713 -    20.829:   33.6125%  (       65)
00:10:29.597     20.829 -    20.945:   34.2108%  (       65)
00:10:29.597     20.945 -    21.062:   34.7722%  (       61)
00:10:29.597     21.062 -    21.178:   35.2876%  (       56)
00:10:29.597     21.178 -    21.295:   35.9411%  (       71)
00:10:29.597     21.295 -    21.411:   36.6590%  (       78)
00:10:29.597     21.411 -    21.527:   37.4137%  (       82)
00:10:29.597     21.527 -    21.644:   38.1500%  (       80)
00:10:29.597     21.644 -    21.760:   38.9784%  (       90)
00:10:29.597     21.760 -    21.876:   39.6503%  (       73)
00:10:29.597     21.876 -    21.993:   40.5246%  (       95)
00:10:29.597     21.993 -    22.109:   41.2885%  (       83)
00:10:29.597     22.109 -    22.225:   42.3654%  (      117)
00:10:29.597     22.225 -    22.342:   43.3686%  (      109)
00:10:29.597     22.342 -    22.458:   44.3810%  (      110)
00:10:29.597     22.458 -    22.575:   45.3751%  (      108)
00:10:29.597     22.575 -    22.691:   46.4151%  (      113)
00:10:29.597     22.691 -    22.807:   47.3999%  (      107)
00:10:29.597     22.807 -    22.924:   48.4031%  (      109)
00:10:29.597     22.924 -    23.040:   49.4248%  (      111)
00:10:29.597     23.040 -    23.156:   50.6397%  (      132)
00:10:29.597     23.156 -    23.273:   51.6153%  (      106)
00:10:29.597     23.273 -    23.389:   52.7566%  (      124)
00:10:29.597     23.389 -    23.505:   54.1463%  (      151)
00:10:29.597     23.505 -    23.622:   55.4533%  (      142)
00:10:29.597     23.622 -    23.738:   56.8155%  (      148)
00:10:29.597     23.738 -    23.855:   58.1592%  (      146)
00:10:29.597     23.855 -    23.971:   59.4570%  (      141)
00:10:29.597     23.971 -    24.087:   61.0216%  (      170)
00:10:29.597     24.087 -    24.204:   62.6691%  (      179)
00:10:29.597     24.204 -    24.320:   63.9669%  (      141)
00:10:29.597     24.320 -    24.436:   65.5315%  (      170)
00:10:29.597     24.436 -    24.553:   67.0410%  (      164)
00:10:29.597     24.553 -    24.669:   68.2559%  (      132)
00:10:29.597     24.669 -    24.785:   69.5812%  (      144)
00:10:29.597     24.785 -    24.902:   70.8329%  (      136)
00:10:29.597     24.902 -    25.018:   72.0479%  (      132)
00:10:29.597     25.018 -    25.135:   73.5757%  (      166)
00:10:29.597     25.135 -    25.251:   74.8274%  (      136)
00:10:29.597     25.251 -    25.367:   75.8399%  (      110)
00:10:29.597     25.367 -    25.484:   76.9627%  (      122)
00:10:29.597     25.484 -    25.600:   78.3157%  (      147)
00:10:29.597     25.600 -    25.716:   79.2361%  (      100)
00:10:29.597     25.716 -    25.833:   80.3221%  (      118)
00:10:29.597     25.833 -    25.949:   81.2149%  (       97)
00:10:29.597     25.949 -    26.065:   81.8960%  (       74)
00:10:29.597     26.065 -    26.182:   82.4850%  (       64)
00:10:29.597     26.182 -    26.298:   83.1293%  (       70)
00:10:29.597     26.298 -    26.415:   83.7920%  (       72)
00:10:29.597     26.415 -    26.531:   84.5099%  (       78)
00:10:29.597     26.531 -    26.647:   85.0805%  (       62)
00:10:29.597     26.647 -    26.764:   85.5131%  (       47)
00:10:29.597     26.764 -    26.880:   85.8997%  (       42)
00:10:29.597     26.880 -    26.996:   86.3323%  (       47)
00:10:29.597     26.996 -    27.113:   86.6268%  (       32)
00:10:29.597     27.113 -    27.229:   87.0318%  (       44)
00:10:29.597     27.229 -    27.345:   87.2895%  (       28)
00:10:29.597     27.345 -    27.462:   87.5380%  (       27)
00:10:29.597     27.462 -    27.578:   87.8509%  (       34)
00:10:29.597     27.578 -    27.695:   88.1270%  (       30)
00:10:29.597     27.695 -    27.811:   88.3663%  (       26)
00:10:29.597     27.811 -    27.927:   88.5596%  (       21)
00:10:29.597     27.927 -    28.044:   88.7253%  (       18)
00:10:29.597     28.044 -    28.160:   88.8173%  (       10)
00:10:29.597     28.160 -    28.276:   89.0750%  (       28)
00:10:29.597     28.276 -    28.393:   89.2959%  (       24)
00:10:29.597     28.393 -    28.509:   89.4708%  (       19)
00:10:29.597     28.509 -    28.625:   89.5812%  (       12)
00:10:29.597     28.625 -    28.742:   89.8021%  (       24)
00:10:29.597     28.742 -    28.858:   90.0322%  (       25)
00:10:29.597     28.858 -    28.975:   90.1795%  (       16)
00:10:29.597     28.975 -    29.091:   90.3083%  (       14)
00:10:29.597     29.091 -    29.207:   90.4740%  (       18)
00:10:29.597     29.207 -    29.324:   90.5844%  (       12)
00:10:29.597     29.324 -    29.440:   90.7225%  (       15)
00:10:29.597     29.440 -    29.556:   90.8514%  (       14)
00:10:29.597     29.556 -    29.673:   90.9158%  (        7)
00:10:29.597     29.673 -    29.789:   91.0723%  (       17)
00:10:29.597     29.789 -    30.022:   91.4128%  (       37)
00:10:29.597     30.022 -    30.255:   91.6705%  (       28)
00:10:29.597     30.255 -    30.487:   92.0387%  (       40)
00:10:29.597     30.487 -    30.720:   92.3424%  (       33)
00:10:29.597     30.720 -    30.953:   92.6645%  (       35)
00:10:29.597     30.953 -    31.185:   92.9590%  (       32)
00:10:29.597     31.185 -    31.418:   93.2352%  (       30)
00:10:29.597     31.418 -    31.651:   93.5941%  (       39)
00:10:29.597     31.651 -    31.884:   93.9070%  (       34)
00:10:29.597     31.884 -    32.116:   94.2476%  (       37)
00:10:29.597     32.116 -    32.349:   94.4869%  (       26)
00:10:29.597     32.349 -    32.582:   94.7906%  (       33)
00:10:29.597     32.582 -    32.815:   95.0207%  (       25)
00:10:29.597     32.815 -    33.047:   95.2784%  (       28)
00:10:29.597     33.047 -    33.280:   95.5545%  (       30)
00:10:29.597     33.280 -    33.513:   95.7478%  (       21)
00:10:29.597     33.513 -    33.745:   95.9779%  (       25)
00:10:29.597     33.745 -    33.978:   96.1988%  (       24)
00:10:29.597     33.978 -    34.211:   96.3645%  (       18)
00:10:29.597     34.211 -    34.444:   96.5486%  (       20)
00:10:29.597     34.444 -    34.676:   96.6314%  (        9)
00:10:29.597     34.676 -    34.909:   96.7510%  (       13)
00:10:29.597     34.909 -    35.142:   96.8431%  (       10)
00:10:29.597     35.142 -    35.375:   96.9259%  (        9)
00:10:29.597     35.375 -    35.607:   97.0548%  (       14)
00:10:29.597     35.607 -    35.840:   97.2204%  (       18)
00:10:29.597     35.840 -    36.073:   97.3125%  (       10)
00:10:29.597     36.073 -    36.305:   97.3953%  (        9)
00:10:29.597     36.305 -    36.538:   97.4689%  (        8)
00:10:29.597     36.538 -    36.771:   97.5058%  (        4)
00:10:29.597     36.771 -    37.004:   97.5794%  (        8)
00:10:29.597     37.004 -    37.236:   97.6806%  (       11)
00:10:29.597     37.236 -    37.469:   97.7358%  (        6)
00:10:29.598     37.469 -    37.702:   97.7727%  (        4)
00:10:29.598     37.702 -    37.935:   97.8279%  (        6)
00:10:29.598     37.935 -    38.167:   97.9015%  (        8)
00:10:29.598     38.167 -    38.400:   97.9659%  (        7)
00:10:29.598     38.400 -    38.633:   98.0304%  (        7)
00:10:29.598     38.633 -    38.865:   98.0948%  (        7)
00:10:29.598     38.865 -    39.098:   98.1316%  (        4)
00:10:29.598     39.098 -    39.331:   98.2052%  (        8)
00:10:29.598     39.331 -    39.564:   98.2605%  (        6)
00:10:29.598     39.564 -    39.796:   98.3249%  (        7)
00:10:29.598     39.796 -    40.029:   98.3893%  (        7)
00:10:29.598     40.029 -    40.262:   98.5274%  (       15)
00:10:29.598     40.262 -    40.495:   98.5826%  (        6)
00:10:29.598     40.495 -    40.727:   98.6286%  (        5)
00:10:29.598     40.727 -    40.960:   98.7115%  (        9)
00:10:29.598     40.960 -    41.193:   98.7299%  (        2)
00:10:29.598     41.193 -    41.425:   98.7575%  (        3)
00:10:29.598     41.425 -    41.658:   98.7851%  (        3)
00:10:29.598     41.658 -    41.891:   98.8587%  (        8)
00:10:29.598     41.891 -    42.124:   98.9324%  (        8)
00:10:29.598     42.124 -    42.356:   98.9968%  (        7)
00:10:29.598     42.356 -    42.589:   99.0428%  (        5)
00:10:29.598     42.589 -    42.822:   99.0980%  (        6)
00:10:29.598     42.822 -    43.055:   99.1624%  (        7)
00:10:29.598     43.055 -    43.287:   99.1993%  (        4)
00:10:29.598     43.287 -    43.520:   99.2637%  (        7)
00:10:29.598     43.520 -    43.753:   99.3373%  (        8)
00:10:29.598     43.753 -    43.985:   99.3925%  (        6)
00:10:29.598     43.985 -    44.218:   99.4202%  (        3)
00:10:29.598     44.218 -    44.451:   99.4938%  (        8)
00:10:29.598     44.451 -    44.684:   99.5306%  (        4)
00:10:29.598     44.684 -    44.916:   99.5398%  (        1)
00:10:29.598     44.916 -    45.149:   99.5674%  (        3)
00:10:29.598     45.149 -    45.382:   99.5950%  (        3)
00:10:29.598     45.382 -    45.615:   99.6042%  (        1)
00:10:29.598     45.847 -    46.080:   99.6134%  (        1)
00:10:29.598     46.080 -    46.313:   99.6226%  (        1)
00:10:29.598     46.545 -    46.778:   99.6503%  (        3)
00:10:29.598     46.778 -    47.011:   99.6779%  (        3)
00:10:29.598     47.011 -    47.244:   99.7055%  (        3)
00:10:29.598     47.244 -    47.476:   99.7239%  (        2)
00:10:29.598     47.476 -    47.709:   99.7515%  (        3)
00:10:29.598     47.709 -    47.942:   99.7699%  (        2)
00:10:29.598     48.175 -    48.407:   99.7975%  (        3)
00:10:29.598     48.407 -    48.640:   99.8067%  (        1)
00:10:29.598     48.640 -    48.873:   99.8159%  (        1)
00:10:29.598     49.105 -    49.338:   99.8435%  (        3)
00:10:29.598     50.036 -    50.269:   99.8527%  (        1)
00:10:29.598     50.269 -    50.502:   99.8619%  (        1)
00:10:29.598     50.735 -    50.967:   99.8803%  (        2)
00:10:29.598     51.665 -    51.898:   99.8896%  (        1)
00:10:29.598     51.898 -    52.131:   99.8988%  (        1)
00:10:29.598     52.364 -    52.596:   99.9080%  (        1)
00:10:29.598     52.829 -    53.062:   99.9264%  (        2)
00:10:29.598     56.785 -    57.018:   99.9356%  (        1)
00:10:29.598     57.484 -    57.716:   99.9448%  (        1)
00:10:29.598     62.371 -    62.836:   99.9540%  (        1)
00:10:29.598     66.095 -    66.560:   99.9632%  (        1)
00:10:29.598     71.680 -    72.145:   99.9724%  (        1)
00:10:29.598    108.451 -   108.916:   99.9816%  (        1)
00:10:29.598    149.876 -   150.807:   99.9908%  (        1)
00:10:29.598    767.069 -   770.793:  100.0000%  (        1)
00:10:29.598  
00:10:29.598  Complete histogram
00:10:29.598  ==================
00:10:29.598         Range in us     Cumulative     Count
00:10:29.598      9.309 -     9.367:    0.0092%  (        1)
00:10:29.598      9.367 -     9.425:    0.0184%  (        1)
00:10:29.598      9.425 -     9.484:    0.0368%  (        2)
00:10:29.598      9.484 -     9.542:    0.1473%  (       12)
00:10:29.598      9.542 -     9.600:    0.5706%  (       46)
00:10:29.598      9.600 -     9.658:    2.2273%  (      180)
00:10:29.598      9.658 -     9.716:    5.4947%  (      355)
00:10:29.598      9.716 -     9.775:   10.0966%  (      500)
00:10:29.598      9.775 -     9.833:   14.4501%  (      473)
00:10:29.598      9.833 -     9.891:   18.4077%  (      430)
00:10:29.598      9.891 -     9.949:   21.1965%  (      303)
00:10:29.598      9.949 -    10.007:   23.0649%  (      203)
00:10:29.598     10.007 -    10.065:   23.9577%  (       97)
00:10:29.598     10.065 -    10.124:   24.5835%  (       68)
00:10:29.598     10.124 -    10.182:   24.9793%  (       43)
00:10:29.598     10.182 -    10.240:   25.2554%  (       30)
00:10:29.598     10.240 -    10.298:   25.4671%  (       23)
00:10:29.598     10.298 -    10.356:   25.5867%  (       13)
00:10:29.598     10.356 -    10.415:   25.6512%  (        7)
00:10:29.598     10.415 -    10.473:   25.7064%  (        6)
00:10:29.598     10.473 -    10.531:   25.7800%  (        8)
00:10:29.598     10.531 -    10.589:   25.8168%  (        4)
00:10:29.598     10.589 -    10.647:   25.8537%  (        4)
00:10:29.598     10.647 -    10.705:   25.8721%  (        2)
00:10:29.598     10.705 -    10.764:   25.8905%  (        2)
00:10:29.598     10.764 -    10.822:   25.9089%  (        2)
00:10:29.598     10.822 -    10.880:   25.9273%  (        2)
00:10:29.598     10.880 -    10.938:   25.9733%  (        5)
00:10:29.598     10.938 -    10.996:   26.0561%  (        9)
00:10:29.598     10.996 -    11.055:   26.2126%  (       17)
00:10:29.598     11.055 -    11.113:   26.3139%  (       11)
00:10:29.598     11.113 -    11.171:   26.5255%  (       23)
00:10:29.598     11.171 -    11.229:   26.7188%  (       21)
00:10:29.598     11.229 -    11.287:   26.9029%  (       20)
00:10:29.598     11.287 -    11.345:   27.0870%  (       20)
00:10:29.598     11.345 -    11.404:   27.2342%  (       16)
00:10:29.598     11.404 -    11.462:   27.4459%  (       23)
00:10:29.598     11.462 -    11.520:   27.5656%  (       13)
00:10:29.598     11.520 -    11.578:   27.7036%  (       15)
00:10:29.598     11.578 -    11.636:   27.8233%  (       13)
00:10:29.598     11.636 -    11.695:   27.9429%  (       13)
00:10:29.598     11.695 -    11.753:   28.0442%  (       11)
00:10:29.598     11.753 -    11.811:   28.1270%  (        9)
00:10:29.598     11.811 -    11.869:   28.2098%  (        9)
00:10:29.598     11.869 -    11.927:   28.3295%  (       13)
00:10:29.598     11.927 -    11.985:   28.3939%  (        7)
00:10:29.598     11.985 -    12.044:   28.4768%  (        9)
00:10:29.598     12.044 -    12.102:   28.5872%  (       12)
00:10:29.598     12.102 -    12.160:   28.7161%  (       14)
00:10:29.598     12.160 -    12.218:   28.8173%  (       11)
00:10:29.598     12.218 -    12.276:   28.9370%  (       13)
00:10:29.598     12.276 -    12.335:   29.0290%  (       10)
00:10:29.598     12.335 -    12.393:   29.1947%  (       18)
00:10:29.598     12.393 -    12.451:   29.2959%  (       11)
00:10:29.598     12.451 -    12.509:   29.3511%  (        6)
00:10:29.598     12.509 -    12.567:   29.4156%  (        7)
00:10:29.598     12.567 -    12.625:   29.4616%  (        5)
00:10:29.598     12.625 -    12.684:   29.4892%  (        3)
00:10:29.598     12.684 -    12.742:   29.5720%  (        9)
00:10:29.598     12.742 -    12.800:   29.6364%  (        7)
00:10:29.598     12.800 -    12.858:   29.7101%  (        8)
00:10:29.598     12.858 -    12.916:   29.7653%  (        6)
00:10:29.598     12.916 -    12.975:   29.8573%  (       10)
00:10:29.598     12.975 -    13.033:   29.9494%  (       10)
00:10:29.598     13.033 -    13.091:   30.0598%  (       12)
00:10:29.598     13.091 -    13.149:   30.1427%  (        9)
00:10:29.598     13.149 -    13.207:   30.2347%  (       10)
00:10:29.598     13.207 -    13.265:   30.3543%  (       13)
00:10:29.598     13.265 -    13.324:   30.4740%  (       13)
00:10:29.598     13.324 -    13.382:   30.6121%  (       15)
00:10:29.598     13.382 -    13.440:   30.7501%  (       15)
00:10:29.598     13.440 -    13.498:   31.0170%  (       29)
00:10:29.598     13.498 -    13.556:   31.2195%  (       22)
00:10:29.598     13.556 -    13.615:   31.3668%  (       16)
00:10:29.598     13.615 -    13.673:   31.5140%  (       16)
00:10:29.598     13.673 -    13.731:   31.7257%  (       23)
00:10:29.598     13.731 -    13.789:   31.9098%  (       20)
00:10:29.598     13.789 -    13.847:   32.2227%  (       34)
00:10:29.598     13.847 -    13.905:   32.4988%  (       30)
00:10:29.598     13.905 -    13.964:   32.8026%  (       33)
00:10:29.598     13.964 -    14.022:   33.1891%  (       42)
00:10:29.598     14.022 -    14.080:   33.6493%  (       50)
00:10:29.598     14.080 -    14.138:   34.0359%  (       42)
00:10:29.598     14.138 -    14.196:   34.4501%  (       45)
00:10:29.598     14.196 -    14.255:   34.7630%  (       34)
00:10:29.598     14.255 -    14.313:   35.0391%  (       30)
00:10:29.598     14.313 -    14.371:   35.4717%  (       47)
00:10:29.598     14.371 -    14.429:   35.8399%  (       40)
00:10:29.598     14.429 -    14.487:   36.1712%  (       36)
00:10:29.598     14.487 -    14.545:   36.4841%  (       34)
00:10:29.598     14.545 -    14.604:   36.8523%  (       40)
00:10:29.598     14.604 -    14.662:   37.2020%  (       38)
00:10:29.598     14.662 -    14.720:   37.5518%  (       38)
00:10:29.598     14.720 -    14.778:   38.0120%  (       50)
00:10:29.598     14.778 -    14.836:   38.3617%  (       38)
00:10:29.598     14.836 -    14.895:   38.7575%  (       43)
00:10:29.598     14.895 -    15.011:   39.6042%  (       92)
00:10:29.598     15.011 -    15.127:   40.5614%  (      104)
00:10:29.598     15.127 -    15.244:   41.3254%  (       83)
00:10:29.598     15.244 -    15.360:   42.3470%  (      111)
00:10:29.598     15.360 -    15.476:   43.5251%  (      128)
00:10:29.598     15.476 -    15.593:   44.6940%  (      127)
00:10:29.598     15.593 -    15.709:   46.1298%  (      156)
00:10:29.598     15.709 -    15.825:   47.8877%  (      191)
00:10:29.598     15.825 -    15.942:   49.4248%  (      167)
00:10:29.598     15.942 -    16.058:   51.2011%  (      193)
00:10:29.598     16.058 -    16.175:   53.0235%  (      198)
00:10:29.598     16.175 -    16.291:   54.8642%  (      200)
00:10:29.598     16.291 -    16.407:   56.6682%  (      196)
00:10:29.598     16.407 -    16.524:   58.4814%  (      197)
00:10:29.598     16.524 -    16.640:   60.1012%  (      176)
00:10:29.598     16.640 -    16.756:   61.6015%  (      163)
00:10:29.598     16.756 -    16.873:   62.9913%  (      151)
00:10:29.598     16.873 -    16.989:   64.4363%  (      157)
00:10:29.598     16.989 -    17.105:   65.6512%  (      132)
00:10:29.598     17.105 -    17.222:   66.9949%  (      146)
00:10:29.598     17.222 -    17.338:   68.1822%  (      129)
00:10:29.598     17.338 -    17.455:   69.3695%  (      129)
00:10:29.598     17.455 -    17.571:   70.4832%  (      121)
00:10:29.598     17.571 -    17.687:   71.7165%  (      134)
00:10:29.598     17.687 -    17.804:   72.9590%  (      135)
00:10:29.598     17.804 -    17.920:   74.2200%  (      137)
00:10:29.599     17.920 -    18.036:   75.4533%  (      134)
00:10:29.599     18.036 -    18.153:   76.6222%  (      127)
00:10:29.599     18.153 -    18.269:   77.8095%  (      129)
00:10:29.599     18.269 -    18.385:   79.0612%  (      136)
00:10:29.599     18.385 -    18.502:   80.2945%  (      134)
00:10:29.599     18.502 -    18.618:   81.2425%  (      103)
00:10:29.599     18.618 -    18.735:   82.4022%  (      126)
00:10:29.599     18.735 -    18.851:   83.3870%  (      107)
00:10:29.599     18.851 -    18.967:   84.2430%  (       93)
00:10:29.599     18.967 -    19.084:   85.1542%  (       99)
00:10:29.599     19.084 -    19.200:   85.8353%  (       74)
00:10:29.599     19.200 -    19.316:   86.6728%  (       91)
00:10:29.599     19.316 -    19.433:   87.2803%  (       66)
00:10:29.599     19.433 -    19.549:   87.9061%  (       68)
00:10:29.599     19.549 -    19.665:   88.4399%  (       58)
00:10:29.599     19.665 -    19.782:   89.0198%  (       63)
00:10:29.599     19.782 -    19.898:   89.4524%  (       47)
00:10:29.599     19.898 -    20.015:   89.8573%  (       44)
00:10:29.599     20.015 -    20.131:   90.2071%  (       38)
00:10:29.599     20.131 -    20.247:   90.5292%  (       35)
00:10:29.599     20.247 -    20.364:   90.9342%  (       44)
00:10:29.599     20.364 -    20.480:   91.1827%  (       27)
00:10:29.599     20.480 -    20.596:   91.4864%  (       33)
00:10:29.599     20.596 -    20.713:   91.7165%  (       25)
00:10:29.599     20.713 -    20.829:   92.0663%  (       38)
00:10:29.599     20.829 -    20.945:   92.2503%  (       20)
00:10:29.599     20.945 -    21.062:   92.4804%  (       25)
00:10:29.599     21.062 -    21.178:   92.7474%  (       29)
00:10:29.599     21.178 -    21.295:   92.9406%  (       21)
00:10:29.599     21.295 -    21.411:   93.1247%  (       20)
00:10:29.599     21.411 -    21.527:   93.2720%  (       16)
00:10:29.599     21.527 -    21.644:   93.5205%  (       27)
00:10:29.599     21.644 -    21.760:   93.6677%  (       16)
00:10:29.599     21.760 -    21.876:   93.9254%  (       28)
00:10:29.599     21.876 -    21.993:   94.1003%  (       19)
00:10:29.599     21.993 -    22.109:   94.2568%  (       17)
00:10:29.599     22.109 -    22.225:   94.4685%  (       23)
00:10:29.599     22.225 -    22.342:   94.6710%  (       22)
00:10:29.599     22.342 -    22.458:   94.8366%  (       18)
00:10:29.599     22.458 -    22.575:   95.0299%  (       21)
00:10:29.599     22.575 -    22.691:   95.1680%  (       15)
00:10:29.599     22.691 -    22.807:   95.2692%  (       11)
00:10:29.599     22.807 -    22.924:   95.3981%  (       14)
00:10:29.599     22.924 -    23.040:   95.5361%  (       15)
00:10:29.599     23.040 -    23.156:   95.6006%  (        7)
00:10:29.599     23.156 -    23.273:   95.7202%  (       13)
00:10:29.599     23.273 -    23.389:   95.7754%  (        6)
00:10:29.599     23.389 -    23.505:   95.8122%  (        4)
00:10:29.599     23.505 -    23.622:   95.9595%  (       16)
00:10:29.599     23.622 -    23.738:   96.0792%  (       13)
00:10:29.599     23.738 -    23.855:   96.1436%  (        7)
00:10:29.599     23.855 -    23.971:   96.1988%  (        6)
00:10:29.599     23.971 -    24.087:   96.2724%  (        8)
00:10:29.599     24.087 -    24.204:   96.3185%  (        5)
00:10:29.599     24.204 -    24.320:   96.3829%  (        7)
00:10:29.599     24.320 -    24.436:   96.4289%  (        5)
00:10:29.599     24.436 -    24.553:   96.4749%  (        5)
00:10:29.599     24.553 -    24.669:   96.5117%  (        4)
00:10:29.599     24.669 -    24.785:   96.5762%  (        7)
00:10:29.599     24.785 -    24.902:   96.5946%  (        2)
00:10:29.599     24.902 -    25.018:   96.6314%  (        4)
00:10:29.599     25.018 -    25.135:   96.6866%  (        6)
00:10:29.599     25.135 -    25.251:   96.7234%  (        4)
00:10:29.599     25.251 -    25.367:   96.7418%  (        2)
00:10:29.599     25.367 -    25.484:   96.7694%  (        3)
00:10:29.599     25.484 -    25.600:   96.7971%  (        3)
00:10:29.599     25.600 -    25.716:   96.8155%  (        2)
00:10:29.599     25.716 -    25.833:   96.8615%  (        5)
00:10:29.599     25.833 -    25.949:   96.8799%  (        2)
00:10:29.599     25.949 -    26.065:   96.9259%  (        5)
00:10:29.599     26.065 -    26.182:   96.9535%  (        3)
00:10:29.599     26.298 -    26.415:   96.9995%  (        5)
00:10:29.599     26.415 -    26.531:   97.0456%  (        5)
00:10:29.599     26.531 -    26.647:   97.0732%  (        3)
00:10:29.599     26.647 -    26.764:   97.1008%  (        3)
00:10:29.599     26.764 -    26.880:   97.1100%  (        1)
00:10:29.599     26.880 -    26.996:   97.1284%  (        2)
00:10:29.599     26.996 -    27.113:   97.1560%  (        3)
00:10:29.599     27.113 -    27.229:   97.2020%  (        5)
00:10:29.599     27.229 -    27.345:   97.2204%  (        2)
00:10:29.599     27.345 -    27.462:   97.2665%  (        5)
00:10:29.599     27.462 -    27.578:   97.3125%  (        5)
00:10:29.599     27.811 -    27.927:   97.3401%  (        3)
00:10:29.599     27.927 -    28.044:   97.3493%  (        1)
00:10:29.599     28.044 -    28.160:   97.4045%  (        6)
00:10:29.599     28.160 -    28.276:   97.4229%  (        2)
00:10:29.599     28.276 -    28.393:   97.4413%  (        2)
00:10:29.599     28.393 -    28.509:   97.4689%  (        3)
00:10:29.599     28.509 -    28.625:   97.4781%  (        1)
00:10:29.599     28.625 -    28.742:   97.4965%  (        2)
00:10:29.599     28.742 -    28.858:   97.5150%  (        2)
00:10:29.599     28.858 -    28.975:   97.5610%  (        5)
00:10:29.599     28.975 -    29.091:   97.5794%  (        2)
00:10:29.599     29.091 -    29.207:   97.6070%  (        3)
00:10:29.599     29.207 -    29.324:   97.6162%  (        1)
00:10:29.599     29.324 -    29.440:   97.6346%  (        2)
00:10:29.599     29.440 -    29.556:   97.6806%  (        5)
00:10:29.599     29.556 -    29.673:   97.6990%  (        2)
00:10:29.599     29.789 -    30.022:   97.7543%  (        6)
00:10:29.599     30.022 -    30.255:   97.7727%  (        2)
00:10:29.599     30.255 -    30.487:   97.8095%  (        4)
00:10:29.599     30.487 -    30.720:   97.8463%  (        4)
00:10:29.599     30.720 -    30.953:   97.9015%  (        6)
00:10:29.599     30.953 -    31.185:   97.9751%  (        8)
00:10:29.599     31.185 -    31.418:   98.0212%  (        5)
00:10:29.599     31.418 -    31.651:   98.0488%  (        3)
00:10:29.599     31.651 -    31.884:   98.0764%  (        3)
00:10:29.599     31.884 -    32.116:   98.1132%  (        4)
00:10:29.599     32.116 -    32.349:   98.1408%  (        3)
00:10:29.599     32.349 -    32.582:   98.1868%  (        5)
00:10:29.599     32.582 -    32.815:   98.2881%  (       11)
00:10:29.599     32.815 -    33.047:   98.3525%  (        7)
00:10:29.599     33.047 -    33.280:   98.4077%  (        6)
00:10:29.599     33.280 -    33.513:   98.4814%  (        8)
00:10:29.599     33.513 -    33.745:   98.5642%  (        9)
00:10:29.599     33.745 -    33.978:   98.6654%  (       11)
00:10:29.599     33.978 -    34.211:   98.7391%  (        8)
00:10:29.599     34.211 -    34.444:   98.7851%  (        5)
00:10:29.599     34.444 -    34.676:   98.8495%  (        7)
00:10:29.599     34.676 -    34.909:   98.9231%  (        8)
00:10:29.599     34.909 -    35.142:   98.9784%  (        6)
00:10:29.599     35.142 -    35.375:   99.0428%  (        7)
00:10:29.599     35.375 -    35.607:   99.1348%  (       10)
00:10:29.599     35.607 -    35.840:   99.1901%  (        6)
00:10:29.599     35.840 -    36.073:   99.2269%  (        4)
00:10:29.599     36.073 -    36.305:   99.2545%  (        3)
00:10:29.599     36.305 -    36.538:   99.3189%  (        7)
00:10:29.599     36.538 -    36.771:   99.3649%  (        5)
00:10:29.599     36.771 -    37.004:   99.3925%  (        3)
00:10:29.599     37.004 -    37.236:   99.4570%  (        7)
00:10:29.599     37.236 -    37.469:   99.5490%  (       10)
00:10:29.599     37.469 -    37.702:   99.6042%  (        6)
00:10:29.599     37.702 -    37.935:   99.6134%  (        1)
00:10:29.599     37.935 -    38.167:   99.6318%  (        2)
00:10:29.599     38.167 -    38.400:   99.6503%  (        2)
00:10:29.599     38.400 -    38.633:   99.6595%  (        1)
00:10:29.599     38.865 -    39.098:   99.6779%  (        2)
00:10:29.599     39.098 -    39.331:   99.7147%  (        4)
00:10:29.599     39.331 -    39.564:   99.7239%  (        1)
00:10:29.599     39.796 -    40.029:   99.7515%  (        3)
00:10:29.599     40.029 -    40.262:   99.7699%  (        2)
00:10:29.599     40.262 -    40.495:   99.7883%  (        2)
00:10:29.599     40.495 -    40.727:   99.8067%  (        2)
00:10:29.599     40.960 -    41.193:   99.8159%  (        1)
00:10:29.599     41.193 -    41.425:   99.8251%  (        1)
00:10:29.599     41.425 -    41.658:   99.8435%  (        2)
00:10:29.599     41.658 -    41.891:   99.8527%  (        1)
00:10:29.599     41.891 -    42.124:   99.8619%  (        1)
00:10:29.599     42.124 -    42.356:   99.8711%  (        1)
00:10:29.599     43.985 -    44.218:   99.8896%  (        2)
00:10:29.599     44.684 -    44.916:   99.9080%  (        2)
00:10:29.599     45.382 -    45.615:   99.9172%  (        1)
00:10:29.599     45.615 -    45.847:   99.9264%  (        1)
00:10:29.599     46.080 -    46.313:   99.9356%  (        1)
00:10:29.599     47.011 -    47.244:   99.9448%  (        1)
00:10:29.599     57.949 -    58.182:   99.9540%  (        1)
00:10:29.599     60.044 -    60.509:   99.9632%  (        1)
00:10:29.599    109.382 -   109.847:   99.9724%  (        1)
00:10:29.599    142.429 -   143.360:   99.9816%  (        1)
00:10:29.599    147.084 -   148.015:   99.9908%  (        1)
00:10:29.599   4051.316 -  4081.105:  100.0000%  (        1)
00:10:29.599  
00:10:29.599  
00:10:29.599  real	0m1.453s
00:10:29.599  user	0m1.201s
00:10:29.599  sys	0m0.181s
00:10:29.599   14:21:08 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:29.599  ************************************
00:10:29.599  END TEST nvme_overhead
00:10:29.599  ************************************
00:10:29.599   14:21:08 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:10:29.858   14:21:08 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:10:29.858   14:21:08 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:10:29.858   14:21:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:29.858   14:21:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:29.858  ************************************
00:10:29.858  START TEST nvme_arbitration
00:10:29.858  ************************************
00:10:29.858   14:21:08 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:10:33.200  Initializing NVMe Controllers
00:10:33.200  Attached to 0000:00:10.0
00:10:33.200  Attached to 0000:00:11.0
00:10:33.200  Attached to 0000:00:13.0
00:10:33.200  Attached to 0000:00:12.0
00:10:33.200  Associating QEMU NVMe Ctrl       (12340               ) with lcore 0
00:10:33.200  Associating QEMU NVMe Ctrl       (12341               ) with lcore 1
00:10:33.200  Associating QEMU NVMe Ctrl       (12343               ) with lcore 2
00:10:33.200  Associating QEMU NVMe Ctrl       (12342               ) with lcore 3
00:10:33.200  Associating QEMU NVMe Ctrl       (12342               ) with lcore 0
00:10:33.200  Associating QEMU NVMe Ctrl       (12342               ) with lcore 1
00:10:33.200  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:10:33.200  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:10:33.200  Initialization complete. Launching workers.
00:10:33.200  Starting thread on core 1 with urgent priority queue
00:10:33.200  Starting thread on core 2 with urgent priority queue
00:10:33.200  Starting thread on core 3 with urgent priority queue
00:10:33.200  Starting thread on core 0 with urgent priority queue
00:10:33.200  QEMU NVMe Ctrl       (12340               ) core 0:   469.33 IO/s   213.07 secs/100000 ios
00:10:33.200  QEMU NVMe Ctrl       (12342               ) core 0:   469.33 IO/s   213.07 secs/100000 ios
00:10:33.200  QEMU NVMe Ctrl       (12341               ) core 1:   576.00 IO/s   173.61 secs/100000 ios
00:10:33.200  QEMU NVMe Ctrl       (12342               ) core 1:   576.00 IO/s   173.61 secs/100000 ios
00:10:33.200  QEMU NVMe Ctrl       (12343               ) core 2:   704.00 IO/s   142.05 secs/100000 ios
00:10:33.200  QEMU NVMe Ctrl       (12342               ) core 3:   490.67 IO/s   203.80 secs/100000 ios
00:10:33.200  ========================================================
00:10:33.200  
00:10:33.200  
00:10:33.200  real	0m3.559s
00:10:33.200  user	0m9.415s
00:10:33.200  sys	0m0.199s
00:10:33.200   14:21:12 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:33.201  ************************************
00:10:33.201  END TEST nvme_arbitration
00:10:33.201   14:21:12 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:10:33.201  ************************************
00:10:33.465   14:21:12 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:10:33.465   14:21:12 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:33.465   14:21:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:33.465   14:21:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:33.465  ************************************
00:10:33.465  START TEST nvme_single_aen
00:10:33.465  ************************************
00:10:33.465   14:21:12 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:10:33.725  Asynchronous Event Request test
00:10:33.725  Attached to 0000:00:10.0
00:10:33.725  Attached to 0000:00:11.0
00:10:33.725  Attached to 0000:00:13.0
00:10:33.725  Attached to 0000:00:12.0
00:10:33.725  Reset controller to setup AER completions for this process
00:10:33.725  Registering asynchronous event callbacks...
00:10:33.725  Getting orig temperature thresholds of all controllers
00:10:33.725  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:33.725  0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:33.725  0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:33.725  0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:33.725  Setting all controllers temperature threshold low to trigger AER
00:10:33.725  Waiting for all controllers temperature threshold to be set lower
00:10:33.725  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:33.725  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:10:33.725  0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:33.725  aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:10:33.725  0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:33.725  aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:10:33.725  0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:33.725  aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:10:33.725  Waiting for all controllers to trigger AER and reset threshold
00:10:33.725  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:10:33.725  0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:10:33.725  0000:00:13.0: Current Temperature:         323 Kelvin (50 Celsius)
00:10:33.725  0000:00:12.0: Current Temperature:         323 Kelvin (50 Celsius)
00:10:33.725  Cleaning up...
00:10:33.725  
00:10:33.725  real	0m0.330s
00:10:33.725  user	0m0.132s
00:10:33.725  sys	0m0.156s
00:10:33.725   14:21:12 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:33.725  ************************************
00:10:33.725  END TEST nvme_single_aen
00:10:33.725  ************************************
00:10:33.725   14:21:12 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:10:33.725   14:21:12 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:10:33.725   14:21:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:33.725   14:21:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:33.725   14:21:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:33.725  ************************************
00:10:33.725  START TEST nvme_doorbell_aers
00:10:33.725  ************************************
00:10:33.725   14:21:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:10:33.725   14:21:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:10:33.725   14:21:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:10:33.725   14:21:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:10:33.725    14:21:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:10:33.725    14:21:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:10:33.725    14:21:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:10:33.725    14:21:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:33.725     14:21:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:33.725     14:21:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:10:33.725    14:21:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:10:33.725    14:21:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:10:33.725   14:21:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:33.725   14:21:12 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:10:34.291  [2024-11-20 14:21:13.001599] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:10:44.281  Executing: test_write_invalid_db
00:10:44.281  Waiting for AER completion...
00:10:44.281  Failure: test_write_invalid_db
00:10:44.281  
00:10:44.281  Executing: test_invalid_db_write_overflow_sq
00:10:44.281  Waiting for AER completion...
00:10:44.281  Failure: test_invalid_db_write_overflow_sq
00:10:44.281  
00:10:44.281  Executing: test_invalid_db_write_overflow_cq
00:10:44.281  Waiting for AER completion...
00:10:44.281  Failure: test_invalid_db_write_overflow_cq
00:10:44.281  
00:10:44.281   14:21:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:44.281   14:21:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:10:44.281  [2024-11-20 14:21:23.020173] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:10:54.310  Executing: test_write_invalid_db
00:10:54.310  Waiting for AER completion...
00:10:54.310  Failure: test_write_invalid_db
00:10:54.310  
00:10:54.310  Executing: test_invalid_db_write_overflow_sq
00:10:54.310  Waiting for AER completion...
00:10:54.310  Failure: test_invalid_db_write_overflow_sq
00:10:54.310  
00:10:54.310  Executing: test_invalid_db_write_overflow_cq
00:10:54.310  Waiting for AER completion...
00:10:54.310  Failure: test_invalid_db_write_overflow_cq
00:10:54.310  
00:10:54.310   14:21:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:54.310   14:21:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:10:54.310  [2024-11-20 14:21:33.038034] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:04.281  Executing: test_write_invalid_db
00:11:04.281  Waiting for AER completion...
00:11:04.281  Failure: test_write_invalid_db
00:11:04.281  
00:11:04.281  Executing: test_invalid_db_write_overflow_sq
00:11:04.281  Waiting for AER completion...
00:11:04.281  Failure: test_invalid_db_write_overflow_sq
00:11:04.281  
00:11:04.281  Executing: test_invalid_db_write_overflow_cq
00:11:04.281  Waiting for AER completion...
00:11:04.281  Failure: test_invalid_db_write_overflow_cq
00:11:04.281  
00:11:04.281   14:21:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:11:04.281   14:21:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:11:04.281  [2024-11-20 14:21:43.082119] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  Executing: test_write_invalid_db
00:11:14.251  Waiting for AER completion...
00:11:14.251  Failure: test_write_invalid_db
00:11:14.251  
00:11:14.251  Executing: test_invalid_db_write_overflow_sq
00:11:14.251  Waiting for AER completion...
00:11:14.251  Failure: test_invalid_db_write_overflow_sq
00:11:14.251  
00:11:14.251  Executing: test_invalid_db_write_overflow_cq
00:11:14.251  Waiting for AER completion...
00:11:14.251  Failure: test_invalid_db_write_overflow_cq
00:11:14.251  
00:11:14.251  
00:11:14.251  real	0m40.247s
00:11:14.251  user	0m34.196s
00:11:14.251  sys	0m5.609s
00:11:14.251   14:21:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:14.251   14:21:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:11:14.251  ************************************
00:11:14.251  END TEST nvme_doorbell_aers
00:11:14.251  ************************************
00:11:14.251    14:21:52 nvme -- nvme/nvme.sh@97 -- # uname
00:11:14.251   14:21:52 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:11:14.251   14:21:52 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:11:14.251   14:21:52 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:11:14.251   14:21:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:14.251   14:21:52 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:14.251  ************************************
00:11:14.251  START TEST nvme_multi_aen
00:11:14.251  ************************************
00:11:14.251   14:21:52 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:11:14.251  [2024-11-20 14:21:53.159387] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.159738] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.159944] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.161623] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.161828] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.162018] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.163606] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.163658] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.163676] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.164970] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.165018] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  [2024-11-20 14:21:53.165037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64997) is not found. Dropping the request.
00:11:14.251  Child process pid: 65513
00:11:14.817  [Child] Asynchronous Event Request test
00:11:14.817  [Child] Attached to 0000:00:10.0
00:11:14.817  [Child] Attached to 0000:00:11.0
00:11:14.817  [Child] Attached to 0000:00:13.0
00:11:14.817  [Child] Attached to 0000:00:12.0
00:11:14.817  [Child] Registering asynchronous event callbacks...
00:11:14.817  [Child] Getting orig temperature thresholds of all controllers
00:11:14.818  [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:14.818  [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:14.818  [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:14.818  [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:14.818  [Child] Waiting for all controllers to trigger AER and reset threshold
00:11:14.818  [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:14.818  [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:14.818  [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:14.818  [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:14.818  [Child] 0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:14.818  [Child] 0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:14.818  [Child] 0000:00:13.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:14.818  [Child] 0000:00:12.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:14.818  [Child] Cleaning up...
00:11:14.818  Asynchronous Event Request test
00:11:14.818  Attached to 0000:00:10.0
00:11:14.818  Attached to 0000:00:11.0
00:11:14.818  Attached to 0000:00:13.0
00:11:14.818  Attached to 0000:00:12.0
00:11:14.818  Reset controller to setup AER completions for this process
00:11:14.818  Registering asynchronous event callbacks...
00:11:14.818  Getting orig temperature thresholds of all controllers
00:11:14.818  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:14.818  0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:14.818  0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:14.818  0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:14.818  Setting all controllers temperature threshold low to trigger AER
00:11:14.818  Waiting for all controllers temperature threshold to be set lower
00:11:14.818  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:14.818  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:11:14.818  0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:14.818  aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:11:14.818  0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:14.818  aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:11:14.818  0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:14.818  aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:11:14.818  Waiting for all controllers to trigger AER and reset threshold
00:11:14.818  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:14.818  0000:00:11.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:14.818  0000:00:13.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:14.818  0000:00:12.0: Current Temperature:         323 Kelvin (50 Celsius)
00:11:14.818  Cleaning up...
00:11:14.818  
00:11:14.818  real	0m0.697s
00:11:14.818  user	0m0.242s
00:11:14.818  sys	0m0.340s
00:11:14.818   14:21:53 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:14.818   14:21:53 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:11:14.818  ************************************
00:11:14.818  END TEST nvme_multi_aen
00:11:14.818  ************************************
00:11:14.818   14:21:53 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:11:14.818   14:21:53 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:14.818   14:21:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:14.818   14:21:53 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:14.818  ************************************
00:11:14.818  START TEST nvme_startup
00:11:14.818  ************************************
00:11:14.818   14:21:53 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:11:15.158  Initializing NVMe Controllers
00:11:15.158  Attached to 0000:00:10.0
00:11:15.158  Attached to 0000:00:11.0
00:11:15.158  Attached to 0000:00:13.0
00:11:15.158  Attached to 0000:00:12.0
00:11:15.158  Initialization complete.
00:11:15.158  Time used:210949.828      (us).
00:11:15.158  ************************************
00:11:15.158  END TEST nvme_startup
00:11:15.158  ************************************
00:11:15.158  
00:11:15.158  real	0m0.306s
00:11:15.158  user	0m0.122s
00:11:15.158  sys	0m0.146s
00:11:15.158   14:21:53 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:15.158   14:21:53 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:11:15.158   14:21:53 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:11:15.158   14:21:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:15.158   14:21:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:15.158   14:21:53 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:15.158  ************************************
00:11:15.158  START TEST nvme_multi_secondary
00:11:15.158  ************************************
00:11:15.158   14:21:53 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:11:15.158   14:21:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65569
00:11:15.158   14:21:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:11:15.158   14:21:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65570
00:11:15.158   14:21:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:11:15.158   14:21:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:11:18.460  Initializing NVMe Controllers
00:11:18.460  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:18.460  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:18.460  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:18.460  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:18.460  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:11:18.460  Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:11:18.460  Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:11:18.460  Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:11:18.460  Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:11:18.460  Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:11:18.460  Initialization complete. Launching workers.
00:11:18.460  ========================================================
00:11:18.460                                                                             Latency(us)
00:11:18.460  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:18.460  PCIE (0000:00:10.0) NSID 1 from core  1:    4855.53      18.97    3292.86     933.64    8604.88
00:11:18.460  PCIE (0000:00:11.0) NSID 1 from core  1:    4855.53      18.97    3294.28     961.62    9203.78
00:11:18.460  PCIE (0000:00:13.0) NSID 1 from core  1:    4855.53      18.97    3294.17     963.15    9567.21
00:11:18.460  PCIE (0000:00:12.0) NSID 1 from core  1:    4855.53      18.97    3294.17     958.27    8653.54
00:11:18.460  PCIE (0000:00:12.0) NSID 2 from core  1:    4855.53      18.97    3294.16     951.95    8593.14
00:11:18.460  PCIE (0000:00:12.0) NSID 3 from core  1:    4855.53      18.97    3293.99     963.21    9401.49
00:11:18.460  ========================================================
00:11:18.460  Total                                  :   29133.18     113.80    3293.94     933.64    9567.21
00:11:18.460  
00:11:18.716  Initializing NVMe Controllers
00:11:18.716  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:18.716  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:18.716  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:18.716  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:18.716  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:11:18.716  Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:11:18.716  Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:11:18.716  Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:11:18.716  Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:11:18.717  Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:11:18.717  Initialization complete. Launching workers.
00:11:18.717  ========================================================
00:11:18.717                                                                             Latency(us)
00:11:18.717  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:18.717  PCIE (0000:00:10.0) NSID 1 from core  2:    2142.66       8.37    7464.27    1604.87   18250.08
00:11:18.717  PCIE (0000:00:11.0) NSID 1 from core  2:    2142.66       8.37    7465.03    1513.18   15524.15
00:11:18.717  PCIE (0000:00:13.0) NSID 1 from core  2:    2142.66       8.37    7465.36    1671.07   25530.44
00:11:18.717  PCIE (0000:00:12.0) NSID 1 from core  2:    2142.66       8.37    7464.92    1531.05   15468.98
00:11:18.717  PCIE (0000:00:12.0) NSID 2 from core  2:    2142.66       8.37    7464.53    1507.32   19486.77
00:11:18.717  PCIE (0000:00:12.0) NSID 3 from core  2:    2142.66       8.37    7460.04    1652.96   16666.70
00:11:18.717  ========================================================
00:11:18.717  Total                                  :   12855.94      50.22    7464.03    1507.32   25530.44
00:11:18.717  
00:11:18.974   14:21:57 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65569
00:11:20.873  Initializing NVMe Controllers
00:11:20.873  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:20.873  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:20.873  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:20.873  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:20.873  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:20.873  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:20.873  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:20.873  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:20.873  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:20.873  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:20.873  Initialization complete. Launching workers.
00:11:20.873  ========================================================
00:11:20.873                                                                             Latency(us)
00:11:20.873  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:20.873  PCIE (0000:00:10.0) NSID 1 from core  0:    6168.67      24.10    2591.30     978.39   13592.98
00:11:20.873  PCIE (0000:00:11.0) NSID 1 from core  0:    6168.67      24.10    2592.98     996.22   12787.94
00:11:20.873  PCIE (0000:00:13.0) NSID 1 from core  0:    6168.67      24.10    2592.92     994.81   12444.23
00:11:20.873  PCIE (0000:00:12.0) NSID 1 from core  0:    6168.67      24.10    2592.89     987.07   12421.02
00:11:20.873  PCIE (0000:00:12.0) NSID 2 from core  0:    6168.67      24.10    2592.81     992.68   13372.89
00:11:20.873  PCIE (0000:00:12.0) NSID 3 from core  0:    6168.67      24.10    2592.75     954.52   13765.61
00:11:20.873  ========================================================
00:11:20.873  Total                                  :   37012.00     144.58    2592.61     954.52   13765.61
00:11:20.873  
00:11:20.873   14:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65570
00:11:20.873   14:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65645
00:11:20.873   14:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:11:20.873   14:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65646
00:11:20.873   14:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:11:20.874   14:21:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:11:24.153  Initializing NVMe Controllers
00:11:24.153  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:24.153  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:24.153  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:24.153  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:24.153  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:11:24.153  Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:11:24.153  Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:11:24.153  Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:11:24.153  Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:11:24.153  Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:11:24.153  Initialization complete. Launching workers.
00:11:24.153  ========================================================
00:11:24.153                                                                             Latency(us)
00:11:24.153  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:24.153  PCIE (0000:00:10.0) NSID 1 from core  1:    4817.58      18.82    3318.78    1100.65    7879.23
00:11:24.153  PCIE (0000:00:11.0) NSID 1 from core  1:    4817.58      18.82    3320.93    1138.27    7803.79
00:11:24.153  PCIE (0000:00:13.0) NSID 1 from core  1:    4817.58      18.82    3320.98    1135.30    7353.51
00:11:24.153  PCIE (0000:00:12.0) NSID 1 from core  1:    4817.58      18.82    3321.45    1183.53    7918.24
00:11:24.153  PCIE (0000:00:12.0) NSID 2 from core  1:    4817.58      18.82    3321.74    1135.64    8124.16
00:11:24.153  PCIE (0000:00:12.0) NSID 3 from core  1:    4817.58      18.82    3321.62    1114.75    8143.63
00:11:24.153  ========================================================
00:11:24.153  Total                                  :   28905.48     112.91    3320.92    1100.65    8143.63
00:11:24.153  
00:11:24.153  Initializing NVMe Controllers
00:11:24.153  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:24.153  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:24.153  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:24.153  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:24.153  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:24.153  Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:24.153  Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:24.153  Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:24.153  Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:24.153  Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:24.153  Initialization complete. Launching workers.
00:11:24.153  ========================================================
00:11:24.153                                                                             Latency(us)
00:11:24.153  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:24.153  PCIE (0000:00:10.0) NSID 1 from core  0:    4547.17      17.76    3516.20    1068.65    9739.64
00:11:24.153  PCIE (0000:00:11.0) NSID 1 from core  0:    4547.17      17.76    3517.94    1101.68   10070.76
00:11:24.153  PCIE (0000:00:13.0) NSID 1 from core  0:    4547.17      17.76    3517.88    1095.07   10256.66
00:11:24.153  PCIE (0000:00:12.0) NSID 1 from core  0:    4547.17      17.76    3517.71    1080.89   10091.12
00:11:24.153  PCIE (0000:00:12.0) NSID 2 from core  0:    4547.17      17.76    3517.52    1070.76   10252.23
00:11:24.153  PCIE (0000:00:12.0) NSID 3 from core  0:    4547.17      17.76    3517.43    1079.07    9778.66
00:11:24.153  ========================================================
00:11:24.153  Total                                  :   27283.03     106.57    3517.45    1068.65   10256.66
00:11:24.153  
00:11:26.685  Initializing NVMe Controllers
00:11:26.685  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:26.685  Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:26.685  Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:26.685  Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:26.685  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:11:26.685  Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:11:26.685  Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:11:26.685  Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:11:26.685  Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:11:26.685  Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:11:26.685  Initialization complete. Launching workers.
00:11:26.685  ========================================================
00:11:26.685                                                                             Latency(us)
00:11:26.685  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:26.685  PCIE (0000:00:10.0) NSID 1 from core  2:    3317.35      12.96    4820.92    1035.71   20279.15
00:11:26.685  PCIE (0000:00:11.0) NSID 1 from core  2:    3317.35      12.96    4822.09    1008.57   20071.81
00:11:26.685  PCIE (0000:00:13.0) NSID 1 from core  2:    3317.35      12.96    4821.75    1067.46   20249.09
00:11:26.685  PCIE (0000:00:12.0) NSID 1 from core  2:    3317.35      12.96    4821.54    1055.23   23838.50
00:11:26.685  PCIE (0000:00:12.0) NSID 2 from core  2:    3317.35      12.96    4821.81    1003.03   23883.17
00:11:26.685  PCIE (0000:00:12.0) NSID 3 from core  2:    3317.35      12.96    4822.20     906.58   23510.61
00:11:26.685  ========================================================
00:11:26.685  Total                                  :   19904.10      77.75    4821.72     906.58   23883.17
00:11:26.685  
00:11:26.685  ************************************
00:11:26.685  END TEST nvme_multi_secondary
00:11:26.685  ************************************
00:11:26.685   14:22:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65645
00:11:26.685   14:22:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65646
00:11:26.685  
00:11:26.685  real	0m11.292s
00:11:26.685  user	0m18.665s
00:11:26.685  sys	0m1.173s
00:11:26.685   14:22:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:26.685   14:22:05 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:11:26.685   14:22:05 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:11:26.685   14:22:05 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:11:26.685   14:22:05 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64570 ]]
00:11:26.685   14:22:05 nvme -- common/autotest_common.sh@1094 -- # kill 64570
00:11:26.685   14:22:05 nvme -- common/autotest_common.sh@1095 -- # wait 64570
00:11:26.685  [2024-11-20 14:22:05.281446] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.281816] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.281886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.281920] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.285171] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.285454] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.285505] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.285535] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.288906] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.288991] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.289022] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.289060] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.291662] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.291857] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.291882] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685  [2024-11-20 14:22:05.291899] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65512) is not found. Dropping the request.
00:11:26.685   14:22:05 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:11:26.685   14:22:05 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:11:26.685   14:22:05 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:11:26.685   14:22:05 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:26.685   14:22:05 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:26.685   14:22:05 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:26.685  ************************************
00:11:26.685  START TEST bdev_nvme_reset_stuck_adm_cmd
00:11:26.685  ************************************
00:11:26.685   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:11:26.686  * Looking for test storage...
00:11:26.686  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:26.686  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:26.686  		--rc genhtml_branch_coverage=1
00:11:26.686  		--rc genhtml_function_coverage=1
00:11:26.686  		--rc genhtml_legend=1
00:11:26.686  		--rc geninfo_all_blocks=1
00:11:26.686  		--rc geninfo_unexecuted_blocks=1
00:11:26.686  		
00:11:26.686  		'
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:26.686  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:26.686  		--rc genhtml_branch_coverage=1
00:11:26.686  		--rc genhtml_function_coverage=1
00:11:26.686  		--rc genhtml_legend=1
00:11:26.686  		--rc geninfo_all_blocks=1
00:11:26.686  		--rc geninfo_unexecuted_blocks=1
00:11:26.686  		
00:11:26.686  		'
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:26.686  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:26.686  		--rc genhtml_branch_coverage=1
00:11:26.686  		--rc genhtml_function_coverage=1
00:11:26.686  		--rc genhtml_legend=1
00:11:26.686  		--rc geninfo_all_blocks=1
00:11:26.686  		--rc geninfo_unexecuted_blocks=1
00:11:26.686  		
00:11:26.686  		'
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:26.686  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:26.686  		--rc genhtml_branch_coverage=1
00:11:26.686  		--rc genhtml_function_coverage=1
00:11:26.686  		--rc genhtml_legend=1
00:11:26.686  		--rc geninfo_all_blocks=1
00:11:26.686  		--rc geninfo_unexecuted_blocks=1
00:11:26.686  		
00:11:26.686  		'
00:11:26.686   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:11:26.686   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:11:26.686   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:11:26.686   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:11:26.686   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=()
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs
00:11:26.686    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=()
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs
00:11:26.686     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:26.686      14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:11:26.686      14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:11:26.945     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:11:26.945     14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:11:26.945    14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:11:26.945  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65812
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65812
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65812 ']'
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:26.945   14:22:05 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:26.945  [2024-11-20 14:22:05.810134] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:11:26.945  [2024-11-20 14:22:05.810468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65812 ]
00:11:27.203  [2024-11-20 14:22:06.009279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:27.203  [2024-11-20 14:22:06.140083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:27.203  [2024-11-20 14:22:06.140174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:27.203  [2024-11-20 14:22:06.140319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:27.203  [2024-11-20 14:22:06.140335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:28.138   14:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:28.138   14:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
00:11:28.138   14:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:11:28.138   14:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.138   14:22:06 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:28.138  nvme0n1
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.138    14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_qdVsl.txt
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:28.138  true
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:28.138    14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732112527
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65835
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:11:28.138   14:22:07 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:30.666  [2024-11-20 14:22:09.067846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:11:30.666  [2024-11-20 14:22:09.068410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:11:30.666  [2024-11-20 14:22:09.068477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:11:30.666  [2024-11-20 14:22:09.068513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:30.666  [2024-11-20 14:22:09.070719] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:11:30.666  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65835
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65835
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65835
00:11:30.666    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:11:30.666    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_qdVsl.txt
00:11:30.666   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:11:30.666    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:11:30.666    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:11:30.666    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:11:30.666     14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:11:30.667     14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:11:30.667      14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:11:30.667    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:11:30.667    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:11:30.667    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:11:30.667    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:11:30.667    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:11:30.667     14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:11:30.667      14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:11:30.667     14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:11:30.667    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:11:30.667    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
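The two base64_decode_bits calls above unpack the 16-byte completion entry captured in the .cpl field of the error-injection output. A standalone re-derivation (not the script's verbatim helper), assuming the standard CQE layout where DW3 bits 31:16 hold the status word (bit 0 phase tag, bits 8:1 SC, bits 11:9 SCT):

    cpl_b64='AAAAAAAAAAAAAAAAAAACAA=='
    b=($(base64 -d <<< "$cpl_b64" | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( b[15] << 8 | b[14] ))                    # 0x0002, little endian
    printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # sc=0x1 (Invalid Opcode), sct=0x0 (generic) -- the pair the test compares
    # against the injected error at sh@75 below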
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_qdVsl.txt
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65812
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65812 ']'
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65812
00:11:30.667    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:30.667    14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65812
00:11:30.667  killing process with pid 65812
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65812'
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65812
00:11:30.667   14:22:09 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65812
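The killprocess trace above shows the guard rails the helper applies before signalling: kill -0 probes that the PID still exists, ps recovers the command name, and a process named sudo is refused so the helper can never take down a privileged wrapper by mistake. A condensed approximation of those checks (not the verbatim autotest_common.sh body):

    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0         # already gone
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1                 # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2> /dev/null || true # reap if it is our child
    }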
00:11:32.562   14:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:11:32.562   14:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:11:32.562  ************************************
00:11:32.562  END TEST bdev_nvme_reset_stuck_adm_cmd
00:11:32.562  ************************************
00:11:32.562  
00:11:32.562  real	0m5.958s
00:11:32.562  user	0m21.347s
00:11:32.562  sys	0m0.660s
00:11:32.562   14:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:32.562   14:22:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:11:32.562   14:22:11 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:11:32.562   14:22:11 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:11:32.562   14:22:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:32.562   14:22:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:32.562   14:22:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:32.563  ************************************
00:11:32.563  START TEST nvme_fio
00:11:32.563  ************************************
00:11:32.563   14:22:11 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:11:32.563   14:22:11 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:11:32.563   14:22:11 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:11:32.563    14:22:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:11:32.563    14:22:11 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:11:32.563    14:22:11 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:11:32.563    14:22:11 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:32.563     14:22:11 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:11:32.563     14:22:11 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:11:32.820    14:22:11 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:11:32.820    14:22:11 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:11:32.820   14:22:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0')
00:11:32.820   14:22:11 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
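The bdfs list above is produced by get_nvme_bdfs, which asks gen_nvme.sh for an SPDK bdev config and pulls each controller's PCI address out of the JSON. The same query can be run standalone, assuming an SPDK checkout at the path the log uses and jq on the PATH:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"       # here: 0000:00:10.0 ... 0000:00:13.0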
00:11:32.820   14:22:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:32.820   14:22:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:11:32.820   14:22:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:11:33.078   14:22:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:11:33.078   14:22:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:11:33.337   14:22:12 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:11:33.337   14:22:12 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:11:33.337    14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:33.337    14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:11:33.337    14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:11:33.337   14:22:12 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:11:33.594  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:11:33.594  fio-3.35
00:11:33.594  Starting 1 thread
00:11:36.877  
00:11:36.877  test: (groupid=0, jobs=1): err= 0: pid=65987: Wed Nov 20 14:22:15 2024
00:11:36.877    read: IOPS=15.1k, BW=59.2MiB/s (62.0MB/s)(118MiB/2001msec)
00:11:36.877      slat (usec): min=4, max=109, avg= 6.32, stdev= 2.48
00:11:36.877      clat (usec): min=317, max=9200, avg=4207.97, stdev=939.18
00:11:36.877       lat (usec): min=323, max=9238, avg=4214.29, stdev=940.28
00:11:36.877      clat percentiles (usec):
00:11:36.877       |  1.00th=[ 2606],  5.00th=[ 3261], 10.00th=[ 3359], 20.00th=[ 3490],
00:11:36.877       | 30.00th=[ 3589], 40.00th=[ 3720], 50.00th=[ 3949], 60.00th=[ 4293],
00:11:36.877       | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5538], 95.00th=[ 6194],
00:11:36.877       | 99.00th=[ 7177], 99.50th=[ 7439], 99.90th=[ 8094], 99.95th=[ 8586],
00:11:36.877       | 99.99th=[ 8979]
00:11:36.877     bw (  KiB/s): min=55272, max=60168, per=95.61%, avg=57920.00, stdev=2472.39, samples=3
00:11:36.877     iops        : min=13818, max=15042, avg=14480.00, stdev=618.10, samples=3
00:11:36.877    write: IOPS=15.2k, BW=59.2MiB/s (62.1MB/s)(119MiB/2001msec); 0 zone resets
00:11:36.877      slat (nsec): min=4628, max=60255, avg=6431.05, stdev=2342.68
00:11:36.877      clat (usec): min=287, max=9081, avg=4211.00, stdev=935.57
00:11:36.877       lat (usec): min=294, max=9090, avg=4217.43, stdev=936.65
00:11:36.877      clat percentiles (usec):
00:11:36.877       |  1.00th=[ 2573],  5.00th=[ 3261], 10.00th=[ 3359], 20.00th=[ 3490],
00:11:36.877       | 30.00th=[ 3589], 40.00th=[ 3720], 50.00th=[ 3949], 60.00th=[ 4293],
00:11:36.877       | 70.00th=[ 4555], 80.00th=[ 4752], 90.00th=[ 5604], 95.00th=[ 6194],
00:11:36.877       | 99.00th=[ 7177], 99.50th=[ 7439], 99.90th=[ 8094], 99.95th=[ 8225],
00:11:36.877       | 99.99th=[ 8979]
00:11:36.877     bw (  KiB/s): min=55600, max=59792, per=95.28%, avg=57792.00, stdev=2102.59, samples=3
00:11:36.877     iops        : min=13900, max=14948, avg=14448.00, stdev=525.65, samples=3
00:11:36.877    lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
00:11:36.877    lat (msec)   : 2=0.17%, 4=51.10%, 10=48.70%
00:11:36.877    cpu          : usr=98.65%, sys=0.20%, ctx=22, majf=0, minf=607
00:11:36.877    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:11:36.877       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:36.877       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:36.877       issued rwts: total=30304,30343,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:36.877       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:36.877  
00:11:36.877  Run status group 0 (all jobs):
00:11:36.877     READ: bw=59.2MiB/s (62.0MB/s), 59.2MiB/s-59.2MiB/s (62.0MB/s-62.0MB/s), io=118MiB (124MB), run=2001-2001msec
00:11:36.877    WRITE: bw=59.2MiB/s (62.1MB/s), 59.2MiB/s-59.2MiB/s (62.1MB/s-62.1MB/s), io=119MiB (124MB), run=2001-2001msec
00:11:36.877  -----------------------------------------------------
00:11:36.877  Suppressions used:
00:11:36.877    count      bytes template
00:11:36.877        1         32 /usr/src/fio/parse.c
00:11:36.877        1          8 libtcmalloc_minimal.so
00:11:36.877  -----------------------------------------------------
00:11:36.877  
00:11:36.877   14:22:15 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
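Each fio pass in this loop is launched through fio_plugin, which first checks whether the SPDK ioengine was built against ASan and, if so, preloads the sanitizer runtime ahead of the plugin so its interceptors resolve first. A condensed sketch of that wrapper, reusing the paths from the trace; note the PCI address in --filename uses dots rather than colons because fio reserves ':' as a filename separator:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096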
00:11:36.877   14:22:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:36.877   14:22:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:11:36.877   14:22:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:11:37.136   14:22:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0'
00:11:37.136   14:22:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:11:37.394   14:22:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:11:37.394   14:22:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:11:37.394    14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:37.394    14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:11:37.394    14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:11:37.394   14:22:16 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
00:11:37.652  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:11:37.652  fio-3.35
00:11:37.652  Starting 1 thread
00:11:40.934  
00:11:40.934  test: (groupid=0, jobs=1): err= 0: pid=66053: Wed Nov 20 14:22:19 2024
00:11:40.934    read: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(110MiB/2001msec)
00:11:40.934      slat (nsec): min=4585, max=71666, avg=6902.41, stdev=2893.76
00:11:40.934      clat (usec): min=241, max=9065, avg=4511.48, stdev=913.07
00:11:40.934       lat (usec): min=246, max=9082, avg=4518.38, stdev=914.78
00:11:40.934      clat percentiles (usec):
00:11:40.934       |  1.00th=[ 3097],  5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3916],
00:11:40.934       | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4178], 60.00th=[ 4359],
00:11:40.934       | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5735], 95.00th=[ 6587],
00:11:40.934       | 99.00th=[ 7570], 99.50th=[ 7635], 99.90th=[ 7832], 99.95th=[ 7898],
00:11:40.934       | 99.99th=[ 8848]
00:11:40.934     bw (  KiB/s): min=50304, max=61720, per=96.75%, avg=54610.67, stdev=6202.62, samples=3
00:11:40.934     iops        : min=12576, max=15430, avg=13652.67, stdev=1550.65, samples=3
00:11:40.934    write: IOPS=14.1k, BW=55.2MiB/s (57.8MB/s)(110MiB/2001msec); 0 zone resets
00:11:40.934      slat (nsec): min=4675, max=54680, avg=7202.02, stdev=2937.18
00:11:40.934      clat (usec): min=231, max=8953, avg=4520.35, stdev=929.73
00:11:40.934       lat (usec): min=236, max=8969, avg=4527.55, stdev=931.50
00:11:40.934      clat percentiles (usec):
00:11:40.934       |  1.00th=[ 3032],  5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3916],
00:11:40.934       | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4178], 60.00th=[ 4359],
00:11:40.934       | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5800], 95.00th=[ 6652],
00:11:40.934       | 99.00th=[ 7570], 99.50th=[ 7635], 99.90th=[ 7832], 99.95th=[ 7898],
00:11:40.934       | 99.99th=[ 8586]
00:11:40.934     bw (  KiB/s): min=50008, max=61880, per=96.73%, avg=54632.00, stdev=6356.11, samples=3
00:11:40.934     iops        : min=12502, max=15470, avg=13658.00, stdev=1589.03, samples=3
00:11:40.934    lat (usec)   : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01%
00:11:40.934    lat (msec)   : 2=0.05%, 4=32.82%, 10=67.09%
00:11:40.934    cpu          : usr=98.65%, sys=0.25%, ctx=14, majf=0, minf=607
00:11:40.934    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:11:40.934       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:40.934       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:40.934       issued rwts: total=28237,28254,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:40.934       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:40.934  
00:11:40.934  Run status group 0 (all jobs):
00:11:40.934     READ: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=110MiB (116MB), run=2001-2001msec
00:11:40.934    WRITE: bw=55.2MiB/s (57.8MB/s), 55.2MiB/s-55.2MiB/s (57.8MB/s-57.8MB/s), io=110MiB (116MB), run=2001-2001msec
00:11:40.934  -----------------------------------------------------
00:11:40.934  Suppressions used:
00:11:40.934    count      bytes template
00:11:40.934        1         32 /usr/src/fio/parse.c
00:11:40.934        1          8 libtcmalloc_minimal.so
00:11:40.934  -----------------------------------------------------
00:11:40.934  
00:11:40.934   14:22:19 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:11:40.934   14:22:19 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:40.934   14:22:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:11:40.934   14:22:19 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:11:40.934   14:22:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:11:40.934   14:22:19 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0'
00:11:41.499   14:22:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:11:41.499   14:22:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:11:41.499    14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:41.499    14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:11:41.499    14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:11:41.499   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:11:41.500   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:11:41.500   14:22:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096
00:11:41.500  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:11:41.500  fio-3.35
00:11:41.500  Starting 1 thread
00:11:44.848  
00:11:44.848  test: (groupid=0, jobs=1): err= 0: pid=66108: Wed Nov 20 14:22:23 2024
00:11:44.848    read: IOPS=12.8k, BW=50.2MiB/s (52.6MB/s)(100MiB/2001msec)
00:11:44.848      slat (nsec): min=4580, max=66176, avg=8478.72, stdev=5067.94
00:11:44.848      clat (usec): min=336, max=13069, avg=4969.36, stdev=1315.14
00:11:44.848       lat (usec): min=342, max=13105, avg=4977.84, stdev=1318.73
00:11:44.848      clat percentiles (usec):
00:11:44.848       |  1.00th=[ 2868],  5.00th=[ 3326], 10.00th=[ 3556], 20.00th=[ 3785],
00:11:44.848       | 30.00th=[ 4080], 40.00th=[ 4490], 50.00th=[ 4686], 60.00th=[ 4948],
00:11:44.848       | 70.00th=[ 5407], 80.00th=[ 6063], 90.00th=[ 7177], 95.00th=[ 7373],
00:11:44.848       | 99.00th=[ 7898], 99.50th=[ 8094], 99.90th=[ 9896], 99.95th=[11207],
00:11:44.848       | 99.99th=[12911]
00:11:44.848     bw (  KiB/s): min=49456, max=54648, per=100.00%, avg=52408.00, stdev=2668.22, samples=3
00:11:44.848     iops        : min=12364, max=13662, avg=13102.00, stdev=667.06, samples=3
00:11:44.848    write: IOPS=12.8k, BW=50.0MiB/s (52.5MB/s)(100MiB/2001msec); 0 zone resets
00:11:44.848      slat (nsec): min=4679, max=97798, avg=8614.62, stdev=5046.62
00:11:44.848      clat (usec): min=305, max=12815, avg=4970.80, stdev=1310.98
00:11:44.848       lat (usec): min=312, max=12831, avg=4979.41, stdev=1314.52
00:11:44.848      clat percentiles (usec):
00:11:44.848       |  1.00th=[ 2868],  5.00th=[ 3326], 10.00th=[ 3556], 20.00th=[ 3785],
00:11:44.848       | 30.00th=[ 4080], 40.00th=[ 4490], 50.00th=[ 4686], 60.00th=[ 4948],
00:11:44.848       | 70.00th=[ 5473], 80.00th=[ 6063], 90.00th=[ 7177], 95.00th=[ 7373],
00:11:44.848       | 99.00th=[ 7832], 99.50th=[ 8094], 99.90th=[10028], 99.95th=[11076],
00:11:44.848       | 99.99th=[12518]
00:11:44.848     bw (  KiB/s): min=49696, max=54400, per=100.00%, avg=52504.00, stdev=2481.07, samples=3
00:11:44.848     iops        : min=12424, max=13600, avg=13126.00, stdev=620.27, samples=3
00:11:44.848    lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
00:11:44.848    lat (msec)   : 2=0.11%, 4=27.80%, 10=71.97%, 20=0.08%
00:11:44.848    cpu          : usr=98.75%, sys=0.10%, ctx=6, majf=0, minf=607
00:11:44.848    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:11:44.848       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:44.848       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:44.848       issued rwts: total=25707,25628,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:44.848       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:44.848  
00:11:44.848  Run status group 0 (all jobs):
00:11:44.849     READ: bw=50.2MiB/s (52.6MB/s), 50.2MiB/s-50.2MiB/s (52.6MB/s-52.6MB/s), io=100MiB (105MB), run=2001-2001msec
00:11:44.849    WRITE: bw=50.0MiB/s (52.5MB/s), 50.0MiB/s-50.0MiB/s (52.5MB/s-52.5MB/s), io=100MiB (105MB), run=2001-2001msec
00:11:44.849  -----------------------------------------------------
00:11:44.849  Suppressions used:
00:11:44.849    count      bytes template
00:11:44.849        1         32 /usr/src/fio/parse.c
00:11:44.849        1          8 libtcmalloc_minimal.so
00:11:44.849  -----------------------------------------------------
00:11:44.849  
00:11:44.849   14:22:23 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:11:44.849   14:22:23 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:11:44.849   14:22:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:11:44.849   14:22:23 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:11:45.413   14:22:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:11:45.413   14:22:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:11:45.670   14:22:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:11:45.670   14:22:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:11:45.670    14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:11:45.670    14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:11:45.670    14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:11:45.670   14:22:24 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:11:45.670  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:11:45.670  fio-3.35
00:11:45.670  Starting 1 thread
00:11:49.857  
00:11:49.857  test: (groupid=0, jobs=1): err= 0: pid=66181: Wed Nov 20 14:22:28 2024
00:11:49.857    read: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(108MiB/2001msec)
00:11:49.857      slat (nsec): min=4579, max=82822, avg=7017.79, stdev=3013.95
00:11:49.857      clat (usec): min=274, max=8607, avg=4633.33, stdev=1122.07
00:11:49.857       lat (usec): min=279, max=8613, avg=4640.35, stdev=1123.41
00:11:49.857      clat percentiles (usec):
00:11:49.857       |  1.00th=[ 2474],  5.00th=[ 3032], 10.00th=[ 3359], 20.00th=[ 3720],
00:11:49.857       | 30.00th=[ 3916], 40.00th=[ 4178], 50.00th=[ 4555], 60.00th=[ 4817],
00:11:49.857       | 70.00th=[ 5080], 80.00th=[ 5604], 90.00th=[ 6194], 95.00th=[ 6849],
00:11:49.857       | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 8291], 99.95th=[ 8356],
00:11:49.857       | 99.99th=[ 8455]
00:11:49.857     bw (  KiB/s): min=51192, max=54826, per=96.34%, avg=53080.67, stdev=1821.24, samples=3
00:11:49.857     iops        : min=12798, max=13706, avg=13270.00, stdev=455.07, samples=3
00:11:49.857    write: IOPS=13.8k, BW=53.7MiB/s (56.3MB/s)(108MiB/2001msec); 0 zone resets
00:11:49.857      slat (usec): min=4, max=143, avg= 7.15, stdev= 3.12
00:11:49.857      clat (usec): min=233, max=8724, avg=4630.68, stdev=1124.53
00:11:49.857       lat (usec): min=239, max=8731, avg=4637.83, stdev=1125.85
00:11:49.857      clat percentiles (usec):
00:11:49.857       |  1.00th=[ 2474],  5.00th=[ 3032], 10.00th=[ 3359], 20.00th=[ 3687],
00:11:49.857       | 30.00th=[ 3916], 40.00th=[ 4178], 50.00th=[ 4555], 60.00th=[ 4817],
00:11:49.857       | 70.00th=[ 5080], 80.00th=[ 5538], 90.00th=[ 6194], 95.00th=[ 6849],
00:11:49.857       | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 8160], 99.95th=[ 8356],
00:11:49.857       | 99.99th=[ 8586]
00:11:49.857     bw (  KiB/s): min=50952, max=55169, per=96.64%, avg=53181.67, stdev=2118.92, samples=3
00:11:49.857     iops        : min=12738, max=13792, avg=13295.33, stdev=529.61, samples=3
00:11:49.857    lat (usec)   : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01%
00:11:49.857    lat (msec)   : 2=0.19%, 4=33.38%, 10=66.38%
00:11:49.857    cpu          : usr=98.55%, sys=0.00%, ctx=22, majf=0, minf=605
00:11:49.857    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:11:49.857       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:49.857       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:11:49.857       issued rwts: total=27562,27528,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:49.857       latency   : target=0, window=0, percentile=100.00%, depth=128
00:11:49.857  
00:11:49.857  Run status group 0 (all jobs):
00:11:49.857     READ: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=108MiB (113MB), run=2001-2001msec
00:11:49.857    WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2001-2001msec
00:11:49.857  -----------------------------------------------------
00:11:49.857  Suppressions used:
00:11:49.857    count      bytes template
00:11:49.857        1         32 /usr/src/fio/parse.c
00:11:49.857        1          8 libtcmalloc_minimal.so
00:11:49.857  -----------------------------------------------------
00:11:49.857  
00:11:49.857   14:22:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:11:49.857   14:22:28 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:11:49.857  
00:11:49.857  real	0m17.066s
00:11:49.857  user	0m13.622s
00:11:49.857  sys	0m2.209s
00:11:49.857   14:22:28 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:49.857   14:22:28 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:11:49.857  ************************************
00:11:49.857  END TEST nvme_fio
00:11:49.857  ************************************
00:11:49.857  
00:11:49.857  real	1m32.250s
00:11:49.857  user	3m49.471s
00:11:49.857  sys	0m14.843s
00:11:49.857   14:22:28 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:49.857   14:22:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:49.857  ************************************
00:11:49.857  END TEST nvme
00:11:49.857  ************************************
00:11:49.857   14:22:28  -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:11:49.857   14:22:28  -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:11:49.857   14:22:28  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:49.857   14:22:28  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:49.857   14:22:28  -- common/autotest_common.sh@10 -- # set +x
00:11:49.857  ************************************
00:11:49.857  START TEST nvme_scc
00:11:49.857  ************************************
00:11:49.857   14:22:28 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:11:49.857  * Looking for test storage...
00:11:49.857  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:49.857     14:22:28 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:49.857      14:22:28 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version
00:11:49.857      14:22:28 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:49.857     14:22:28 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@345 -- # : 1
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:49.857     14:22:28 nvme_scc -- scripts/common.sh@368 -- # return 0
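The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov is older than 2: both version strings are split on '.', '-' and ':', then compared component by component as integers, padding the shorter one with zeros. A trimmed standalone helper in the same spirit (version_lt is a hypothetical name, not the script's):

    version_lt() {                       # returns 0 iff $1 sorts before $2
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                         # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2" # matches the lt 1.15 2 result above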
00:11:49.857     14:22:28 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:49.857     14:22:28 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:49.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:49.857  		--rc genhtml_branch_coverage=1
00:11:49.857  		--rc genhtml_function_coverage=1
00:11:49.857  		--rc genhtml_legend=1
00:11:49.857  		--rc geninfo_all_blocks=1
00:11:49.857  		--rc geninfo_unexecuted_blocks=1
00:11:49.857  		
00:11:49.857  		'
00:11:49.857     14:22:28 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:49.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:49.857  		--rc genhtml_branch_coverage=1
00:11:49.857  		--rc genhtml_function_coverage=1
00:11:49.857  		--rc genhtml_legend=1
00:11:49.857  		--rc geninfo_all_blocks=1
00:11:49.857  		--rc geninfo_unexecuted_blocks=1
00:11:49.857  		
00:11:49.857  		'
00:11:49.857     14:22:28 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:49.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:49.857  		--rc genhtml_branch_coverage=1
00:11:49.857  		--rc genhtml_function_coverage=1
00:11:49.857  		--rc genhtml_legend=1
00:11:49.857  		--rc geninfo_all_blocks=1
00:11:49.857  		--rc geninfo_unexecuted_blocks=1
00:11:49.857  		
00:11:49.857  		'
00:11:49.857     14:22:28 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:49.857  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:49.857  		--rc genhtml_branch_coverage=1
00:11:49.857  		--rc genhtml_function_coverage=1
00:11:49.857  		--rc genhtml_legend=1
00:11:49.857  		--rc geninfo_all_blocks=1
00:11:49.857  		--rc geninfo_unexecuted_blocks=1
00:11:49.857  		
00:11:49.857  		'
00:11:49.857    14:22:28 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:49.857       14:22:28 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:49.857      14:22:28 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:49.857      14:22:28 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:49.857       14:22:28 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:49.857       14:22:28 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:49.857       14:22:28 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:49.857       14:22:28 nvme_scc -- paths/export.sh@5 -- # export PATH
00:11:49.857       14:22:28 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:11:49.857     14:22:28 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:11:49.857    14:22:28 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:49.857    14:22:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:11:49.857   14:22:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:11:49.857   14:22:28 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:11:49.857   14:22:28 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:11:50.423  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:50.423  Waiting for block devices as requested
00:11:50.423  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:11:50.423  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:11:50.681  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:11:50.681  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:11:55.945  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
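setup.sh reset has just handed the four test controllers back from uio_pci_generic to the kernel nvme driver; the note about 0000:00:13.0 likely means udev events for its block devices were not observed inside the wait window, which is usually benign. Current bindings can be confirmed straight from sysfs:

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
            echo "$bdf -> $(basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")")"
        else
            echo "$bdf -> (no driver bound)"
        fi
    done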
00:11:55.945   14:22:34 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:11:55.945   14:22:34 nvme_scc -- scripts/common.sh@18 -- # local i
00:11:55.945   14:22:34 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:11:55.945   14:22:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:55.945   14:22:34 nvme_scc -- scripts/common.sh@27 -- # return 0
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
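Everything from here to the end of the scan is nvme_get caching each id-ctrl field of the freshly detected controller into the nvme0 associative array: nvme-cli's "field : value" output is read line by line with IFS=:, and each pair becomes an eval'd nvme0[field]=value assignment. A compact sketch of the same pattern (it trims values, whereas the real helper deliberately keeps the spec's fixed-width padding, e.g. sn='12341               '):

    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}             # e.g. vid, ssvid, sn, mdts ...
        nvme0[$reg]=$(xargs <<< "$val")      # trimmed; the real helper keeps padding
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    printf 'vid=%s mdts=%s\n' "${nvme0[vid]}" "${nvme0[mdts]}"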
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12341                ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341               "'
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341               '
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:11:55.945    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.945   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:11:55.946    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.946   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:11:55.947    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.947   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12341 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
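[editor's note] The trace above is the tail of nvme_get populating the nvme0 associative array: each stdout line of `nvme id-ctrl /dev/nvme0` is split on the first ':' into a register name and value (functions.sh@21-23), and assigned via eval. A minimal sketch reconstructed from the traced line numbers; the real helper's exact trimming and nvme-cli invocation may differ (this CI run resolves the binary to /usr/local/src/nvme-cli/nvme):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"            # e.g. declares the global array nvme0=()

        # A line like "oacs      : 0x12a" splits into reg="oacs      " and
        # val=" 0x12a"; everything after the first colon stays in val.
        while IFS=: read -r reg val; do
            reg=${reg// /}             # "ps    0 " -> "ps0", "   rwt" -> "rwt"
            [[ -n $val ]] || continue  # skips banner lines with no value
            eval "${ref}[${reg}]=\"${val# }\""
        done < <(nvme "$@")            # e.g. nvme id-ctrl /dev/nvme0
    }

Two quirks of this first-colon split are visible in the trace: banner lines yield an empty val and are skipped (the `[[ -n '' ]]` checks), and the two-line power-state entry keeps its embedded "enlat:16 exlat:4 ..." text under ps0 while its wrapped continuation lands under the key rwt.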
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.948   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"'
00:11:55.948    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:11:55.949    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.949   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:11:55.950    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:11:55.950   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
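The eight lbaf0-lbaf7 entries captured above describe the namespace's supported LBA formats: ms is the per-block metadata size in bytes, lbads is the data size as a power-of-two exponent, and rp is a relative performance hint; the "(in use)" suffix marks the active format, here lbaf4 (4096-byte blocks, no metadata). A one-line sanity check of the exponent, as a minimal bash sketch:

    lbads=12                  # from the lbaf4 entry above
    echo $(( 1 << lbads ))    # prints 4096: the logical block size in bytes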
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
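functions.sh@58-@63 close out nvme0: the last namespace is filed into the controller's namespace map, then the controller map, the namespace-map name, the PCI address, and the ordering slot are recorded before the outer loop advances. A minimal sketch of that bookkeeping, reconstructed from the xtrace rather than copied from SPDK (how $pci is derived is not shown in this trace; the readlink call below is an assumption):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed; the trace only shows the result
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                         # nvme0, nvme1, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        # ... per-namespace id-ns parsing (functions.sh@53-@58) ...
        ctrls["$ctrl_dev"]=$ctrl_dev                 # id-ctrl fields live in the array named after the device
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns            # name of the per-controller namespace map
        bdfs["$ctrl_dev"]=$pci                       # e.g. 0000:00:11.0 for nvme0
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # keeps controllers in numeric order
    done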
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:11:55.951   14:22:34 nvme_scc -- scripts/common.sh@18 -- # local i
00:11:55.951   14:22:34 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:11:55.951   14:22:34 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:55.951   14:22:34 nvme_scc -- scripts/common.sh@27 -- # return 0
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
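pci_can_use (scripts/common.sh@18-@27) gates each controller on block/allow lists; both are empty in this run, so 0000:00:10.0 passes straight through. A reconstruction matching the traced tests at @21 (a regex against an empty block list), @25 ([[ -z '' ]] on the allow list), and @27 (return 0); the list variable names are the conventional SPDK ones and are assumptions here, since the trace only shows their empty values:

    pci_can_use() {
        local i
        # Explicitly blocked -> never usable (the empty match seen at @21).
        [[ " $PCI_BLOCKED " =~ \ $1\  ]] && return 1
        # No allow list -> everything else is usable (the path taken in this run).
        [[ -z $PCI_ALLOWED ]] && return 0
        # Otherwise the BDF must appear on the allow list.
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }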
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
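functions.sh@16-@23, replayed here for nvme1, is the entire parser: run the nvme-cli report, split every output line on the first colon into reg and val, skip lines whose value is empty (the [[ -n '' ]] just above is the report's banner line), and eval the pair into a global associative array named after the device. A reconstruction from the trace; the two trim expressions are assumptions, since only their effect (clean keys like vid, values without the leading space) is visible:

    nvme_get() {                         # invoked as: nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val             # functions.sh@17
        shift                            # functions.sh@18
        local -gA "$ref=()"              # functions.sh@20
        while IFS=: read -r reg val; do  # functions.sh@21
            [[ -n $val ]] || continue    # functions.sh@22: drop banner/blank lines
            eval "${ref}[${reg// /}]=\"${val# }\""   # functions.sh@23 (trims assumed)
        done < <(nvme "$@")              # traced as /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
    }

Because val is the last variable handed to read, it keeps any further colons, which is what lets multi-field values such as the lbaf and power-state lines through intact.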
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:11:55.951   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:11:55.951    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340               "'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340               '
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl                          "'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl                          '
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0   "'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0   '
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
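mdts=7 is worth decoding: the maximum data transfer size is 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN, which is not part of this id-ctrl dump; 4 KiB is the usual value for this QEMU controller), so a single I/O here is capped at 512 KiB:

    echo $(( (1 << 7) * 4096 ))   # 524288 bytes = 512 KiB, assuming a 4 KiB MPSMIN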
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:55.952   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"'
00:11:55.952    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:11:56.218   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.218   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.218   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:11:56.219    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.219   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
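oncs=0x15d is the field this nvme_scc suite ultimately cares about: bit 8 (0x100) advertises the Simple Copy command, and the remaining set bits are Compare (0x1), Dataset Management (0x4), Write Zeroes (0x8), the Save/Select field in Set/Get Features (0x10), and Timestamp (0x40). With the array populated, the check is a single bit test; the helper name below is illustrative, not SPDK's:

    ctrl_supports_scc() {
        local -n _ctrl=$1                  # e.g. the nvme1 array filled in above
        (( ${_ctrl[oncs]} & 0x100 ))       # ONCS bit 8 = Copy command supported
    }
    ctrl_supports_scc nvme1 && echo "nvme1 supports Simple Copy"

0x15d & 0x100 is nonzero, so nvme1 qualifies for the copy tests.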
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
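The ps0/rwt/active_power_workload triple just above shows the one quirk of the colon split: nvme-cli wraps the power-state descriptor across physical lines, so the continuation text parses as pseudo-keys of its own. Replaying the two wrapped lines through the same split makes this visible:

    printf '%s\n' \
        'ps    0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' \
        '          rwt:0 rwl:0 idle_power:- active_power:-' |
    while IFS=: read -r reg val; do
        printf '%s -> %s\n' "${reg// /}" "$val"   # ps0 -> mp:25.00W ..., rwt -> 0 rwl:0 ...
    done

The suite does not appear to consume the wrapped fields, so the stray rwt and active_power_workload keys are harmless here.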
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"'
00:11:56.220    14:22:34 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.220   14:22:34 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:11:56.220   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"'
00:11:56.221    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:11:56.221   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
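The @16-@23 lines above are the whole of nvme_get at work: run nvme-cli once, split every output line on its first ':' with IFS=: read, skip lines that carry no value, and eval what is left into a global associative array named after the device. A minimal sketch of that loop, assuming bash 4.2+ for local -gA; the exact quoting and field trimming are reconstructed from the trace, not copied from nvme/functions.sh:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                        # @20: global assoc array named after the device
        while IFS=: read -r reg val; do            # @21: reg gets the text before the first ':'
            [[ -n $val ]] || continue              # @22: header/blank lines have no value part
            reg=${reg//[[:space:]]/}               # keys arrive padded, e.g. "lbaf  0 " -> lbaf0
            eval "${ref}[$reg]=\"${val# }\""       # @23: e.g. ng1n1[endgid]="0" (sketch; assumes no quotes in val)
        done < <(/usr/local/src/nvme-cli/nvme "$@")    # @16: e.g. nvme id-ns /dev/ng1n1
    }

After nvme_get ng1n1 id-ns /dev/ng1n1, the fields read back as ${ng1n1[nsze]}, ${ng1n1[lbaf0]} and so on, which is exactly the array the assignments above build up entry by entry.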
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.222   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"'
00:11:56.222    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:11:56.223    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.223   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
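Two idioms in the loop that just closed are worth unpacking. The @54 glob @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* needs extglob and matches both namespace node flavours under one controller: the generic character device (ng1n1) and the block device (nvme1n1). The @58 key ${ns##*n} strips everything through the last 'n', so both spellings land in the same namespace slot. A standalone sketch of the same expansions, assuming the sysfs path exists on the test host:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                 # @55: the glob may expand to nothing
        echo "nsid=${ns##*n} dev=${ns##*/}"      # ng1n1 and nvme1n1 both yield nsid=1
    done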
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
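The four @60-@63 assignments are the per-controller bookkeeping: every map is keyed by the controller device name, and the values are the names of the arrays nvme_get just populated. A sketch of the resulting layout, with the values taken from the trace above; the declare lines are an assumption about how the maps are initialized earlier in functions.sh:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme1 pci=0000:00:10.0
    ctrls[$ctrl_dev]=$ctrl_dev                   # @60: nvme1 -> id-ctrl array "nvme1"
    nvmes[$ctrl_dev]=${ctrl_dev}_ns              # @61: nvme1 -> namespace map "nvme1_ns"
    bdfs[$ctrl_dev]=$pci                         # @62: nvme1 -> PCI address 0000:00:10.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # @63: sparse array slot 1 -> nvme1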
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:11:56.224   14:22:35 nvme_scc -- scripts/common.sh@18 -- # local i
00:11:56.224   14:22:35 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:12.0  ]]
00:11:56.224   14:22:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:56.224   14:22:35 nvme_scc -- scripts/common.sh@27 -- # return 0
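The scripts/common.sh@18-27 lines are pci_can_use deciding that 0000:00:12.0 is fair game: the BDF is not in the (empty) block list, and with no allow list configured the function returns 0 early. A reconstruction from the trace; everything past the two tests actually shown (@21, @25) is an assumption about the allow-list branch:

    pci_can_use() {
        local i
        [[ " $pci_blocked " =~ " $1 " ]] && return 1   # @21: deny-listed BDF loses immediately
        [[ -z $pci_allowed ]] && return 0              # @25/@27: no allow list -> anything goes
        for i in $pci_allowed; do                      # assumed: otherwise require an exact match
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }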
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12342                ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342               "'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342               '
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl                          "'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl                          '
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0   "'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0   '
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"'
00:11:56.224    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.224   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"'
00:11:56.225    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:11:56.225   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12342 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"'
00:11:56.226    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.226   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
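[editor's note] The run of functions.sh@21-23 calls above is the body of nvme_get's parsing loop: each output line of the identify command is split on the first ':' into reg and val, empty values are skipped at @22, and non-empty ones are stored into the controller's associative array at @23. Because IFS=: read -r reg val only splits at the first colon, a power-state continuation line is captured whole — which is why nvme2[rwt] ends up holding '0 rwl:0 idle_power:- active_power:-' rather than a bare number. A minimal sketch of that loop, reconstructed from the @16-@23 calls visible in this trace (the whitespace trimming and the "ps 0" -> "ps0" key normalization are assumptions inferred from the stored keys, not copied from the real script):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                     # @20: declare e.g. nvme2=() globally
        while IFS=: read -r reg val; do         # @21: split each line on the first ':'
            [[ -n $val ]] || continue           # @22: skip lines with no value part
            reg=${reg//[[:space:]]/}            # assumed: 'ps 0 ' -> 'ps0'
            eval "${ref}[${reg}]=\"${val# }\""  # @23: store, dropping one leading space
        done < <("$@")                          # @16: e.g. nvme id-ctrl /dev/nvme2
    }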
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:11:56.227   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"'
00:11:56.227    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"'
00:11:56.492    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"'
00:11:56.492    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"'
00:11:56.492    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"'
00:11:56.492    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"'
00:11:56.492    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"'
00:11:56.492    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"'
00:11:56.492    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.492   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:11:56.493    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
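[editor's note] With ng2n1 fully parsed, the @58 line records it in the controller's namespace map. The surrounding machinery: functions.sh@53 binds a nameref so the per-controller map (nvme2_ns) can be filled generically, and the @54 extglob loop walks every namespace node under the controller's sysfs directory, matching both the character-device names (ng2n1, ng2n2, ...) and the block-device names (nvme2n1, ...). A sketch of that enumeration, assuming ctrl=/sys/class/nvme/nvme2 as in this trace and extglob enabled; the ns_dev derivation at @56 is inferred from the values it produces here:

    local -n _ctrl_ns=${ctrl##*/}_ns                    # @53: nameref -> nvme2_ns
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue                        # @55: the glob may match nothing
        ns_dev=${ns##*/}                                # @56: ng2n1, nvme2n1, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"         # @57: fill ng2n1=() and friends
        _ctrl_ns[${ns##*n}]=$ns_dev                     # @58: key is the namespace ID
    done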
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:11:56.493   14:22:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"'
00:11:56.494    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.494   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
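[editor's note] The ng2n2 parse above repeats the ng2n1 pattern field for field; the only moving part is the @58 key derivation. ${ns##*n} strips everything through the last 'n' in the sysfs path, leaving just the namespace ID, so the character-device and block-device nodes for the same namespace land on the same key:

    ns=/sys/class/nvme/nvme2/ng2n2
    echo "${ns##*n}"        # -> 2
    ns=/sys/class/nvme/nvme2/nvme2n2
    echo "${ns##*n}"        # -> 2

Since the glob expands the ng* names before the nvme2n* names (the trace order below confirms this), the block-device parses overwrite the ng entries, leaving nvme2_ns mapping each NSID to its block device — assuming the nvme2n2/nvme2n3 iterations follow past the end of this excerpt.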
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"'
00:11:56.495    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.495   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:11:56.496   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:11:56.496    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
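[editor's note] Across ng2n1-ng2n3 the identify data is identical: eight LBA formats (nlbaf=7 means 7+1 entries, lbaf0-lbaf7), flbas=0x4 selecting entry 4, and lbaf4 carrying ms:0 lbads:12, i.e. 4096-byte logical blocks with no per-block metadata. A sketch of decoding that from the associative arrays this trace populates; the low-nibble masking follows the NVMe FLBAS field layout (safe here since only 8 formats exist), and the string surgery on the lbaf value is my own illustration, not something functions.sh does:

    fmt=$(( ${ng2n3[flbas]} & 0xf ))          # low nibble of FLBAS = format index -> 4
    lbaf=${ng2n3[lbaf${fmt}]}                 # -> 'ms:0   lbads:12 rp:0 (in use)'
    lbads=${lbaf##*lbads:}; lbads=${lbads%% *}
    echo "block size: $((1 << lbads)) bytes"  # -> block size: 4096 bytes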
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"'
00:11:56.497    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.497   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
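[Editor's note] A worked check of the values just parsed for nvme2n1, assuming the standard NVMe meaning of flbas/lbads (this snippet is illustrative, not part of functions.sh): flbas=0x4 selects LBA format 4, whose descriptor 'ms:0 lbads:12 rp:0 (in use)' gives 2^12-byte blocks, and nsze=0x100000 blocks of that size works out to 4 GiB:

    flbas=0x4 nsze=0x100000 lbads=12
    fmt=$((flbas & 0xf))                 # low nibble of flbas = active LBA format index
    block=$((1 << lbads))                # lbads:12 -> 4096-byte logical blocks
    printf 'format=%d block=%dB capacity=%dMiB\n' \
        "$fmt" "$block" $((nsze * block >> 20))   # -> format=4 block=4096B capacity=4096MiB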
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"'
00:11:56.498    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.498   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"'
00:11:56.499    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.499   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
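[Editor's note] The index in `_ctrl_ns[${ns##*n}]` (functions.sh@58) is plain parameter expansion: `##*n` deletes the longest prefix matching '*n', leaving only the namespace number, which is why each device lands at a numeric slot. A one-line illustration:

    ns=/sys/class/nvme/nvme2/nvme2n2
    echo "${ns##*n}"    # -> 2, hence _ctrl_ns[2]=nvme2n2 in the trace above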
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"'
00:11:56.500    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.500   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:11:56.501   14:22:35 nvme_scc -- scripts/common.sh@18 -- # local i
00:11:56.501   14:22:35 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:13.0  ]]
00:11:56.501   14:22:35 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:56.501   14:22:35 nvme_scc -- scripts/common.sh@27 -- # return 0
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501    14:22:35 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.501   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"'
00:11:56.502    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"'
00:11:56.502    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12343                ]]
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343               "'
00:11:56.502    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343               '
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl                          "'
00:11:56.502    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl                          '
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0   "'
00:11:56.502    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0   '
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"'
00:11:56.502    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"'
00:11:56.502    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x2 ]]
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"'
00:11:56.502    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.502   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x88010 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.762   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"'
00:11:56.762    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"'
00:11:56.763    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.763   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"'
00:11:56.764    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.764   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:fdp-subsys3 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"'
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:11:56.765   14:22:35 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
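[annotation] The scan that just finished repeats one pattern per controller: run `nvme id-ctrl`, split every "name : value" line on the colon (functions.sh@21, IFS=:), and stash the pair in a per-controller associative array (functions.sh@23). A minimal standalone sketch of that loop, assuming nvme-cli and the /dev/nvme3 node from the trace; the array name `ctrl` is a placeholder, not the script's actual global:

    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                      # skip lines without a "name : value" pair
        reg=${reg//[[:space:]]/}                       # "ps    0" -> "ps0", matching the keys above
        val="${val#"${val%%[![:space:]]*}"}"           # strip the leading padding nvme-cli prints
        ctrl[$reg]=$val                                # e.g. ctrl[oncs]='0x15d', ctrl[mdts]='7'
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "vid=${ctrl[vid]} oncs=${ctrl[oncs]}"

The real nvme_get additionally handles multi-line fields such as ps0, which is why the trace shows the power-state line arriving split across two reads.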
00:11:56.765    14:22:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:11:56.765    14:22:35 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:11:56.765     14:22:35 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:11:56.765     14:22:35 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:11:56.765     14:22:35 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:11:56.765      14:22:35 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:11:56.765     14:22:35 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:11:56.765     14:22:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:56.766      14:22:35 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:56.766     14:22:35 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:11:56.766    14:22:35 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:11:56.766    14:22:35 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:11:56.766    14:22:35 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:11:56.766   14:22:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:56.766   14:22:35 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:11:56.766   14:22:35 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:57.332  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:57.899  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:57.899  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:57.899  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:57.899  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
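[annotation] setup.sh here detaches each QEMU NVMe controller (1b36 0010) from the kernel nvme driver and hands it to uio_pci_generic so the test binary can drive the device from userspace; the virtio disk at 0000:00:03.0 is skipped because its partitions are mounted. The rebinding is the standard sysfs sequence; a hand-rolled sketch for one device, where the BDF is a placeholder and SPDK's script performs more validation than shown:

    bdf=0000:00:10.0
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"         # detach from nvme
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe                        # rebind honoring the override

Requires root; run against a device you can afford to take away from the kernel.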
00:11:57.899   14:22:36 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:57.899   14:22:36 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:57.899   14:22:36 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:57.899   14:22:36 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:57.899  ************************************
00:11:57.899  START TEST nvme_simple_copy
00:11:57.899  ************************************
00:11:57.899   14:22:36 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:58.158  Initializing NVMe Controllers
00:11:58.158  Attaching to 0000:00:10.0
00:11:58.158  Controller supports SCC. Attached to 0000:00:10.0
00:11:58.158    Namespace ID: 1 size: 6GB
00:11:58.158  Initialization complete.
00:11:58.158  
00:11:58.158  Controller QEMU NVMe Ctrl       (12340               )
00:11:58.158  Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:58.158  Namespace Block Size:4096
00:11:58.158  Writing LBAs 0 to 63 with Random Data
00:11:58.158  Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:58.158  LBAs matching Written Data: 64
00:11:58.158  
00:11:58.158  real	0m0.320s
00:11:58.158  user	0m0.134s
00:11:58.158  sys	0m0.083s
00:11:58.158   14:22:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:58.158   14:22:37 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:11:58.158  ************************************
00:11:58.158  END TEST nvme_simple_copy
00:11:58.158  ************************************
00:11:58.158  
00:11:58.158  real	0m8.466s
00:11:58.158  user	0m1.626s
00:11:58.158  sys	0m1.606s
00:11:58.158   14:22:37 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:58.158   14:22:37 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:58.158  ************************************
00:11:58.158  END TEST nvme_scc
00:11:58.158  ************************************
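[annotation] The simple_copy body is an SPDK C binary, but its pass criterion is readable from the output above: write LBAs 0-63 with random data, issue a Simple Copy with destination LBA 256, and require all 64 destination blocks to match ("LBAs matching Written Data: 64"). A rough bash rendering of just the verification step, assuming the 4096-byte block size reported above and a copy that has already been performed; the device node is illustrative, since the real test reaches the controller through SPDK's userspace PCIe driver rather than a kernel block device:

    bs=4096
    dd if=/dev/nvme0n1 of=/tmp/src.bin bs=$bs skip=0   count=64 status=none
    dd if=/dev/nvme0n1 of=/tmp/dst.bin bs=$bs skip=256 count=64 status=none
    cmp -s /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"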
00:11:58.416   14:22:37  -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:11:58.416   14:22:37  -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:11:58.416   14:22:37  -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:11:58.416   14:22:37  -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:11:58.416   14:22:37  -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:11:58.416   14:22:37  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:58.416   14:22:37  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:58.416   14:22:37  -- common/autotest_common.sh@10 -- # set +x
00:11:58.416  ************************************
00:11:58.416  START TEST nvme_fdp
00:11:58.416  ************************************
00:11:58.416   14:22:37 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:11:58.416  * Looking for test storage...
00:11:58.416  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:58.416     14:22:37 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:58.416      14:22:37 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:58.417      14:22:37 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:11:58.417     14:22:37 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:58.417     14:22:37 nvme_fdp -- scripts/common.sh@368 -- # return 0
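[annotation] The lt 1.15 2 trace above is scripts/common.sh's cmp_versions: both version strings are split into arrays on IFS=.-:, padded component-by-component, and compared numerically left to right (here 1 < 2 decides it at the first component, hence return 0). A condensed sketch of the same idea, without the script's decimal-validation helper:

    lt() {  # usage: lt 1.15 2  -> returns 0 when $1 < $2
        local -a v1 v2; local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing components count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal is not less-than
    }
    lt 1.15 2 && echo "1.15 < 2"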
00:11:58.417     14:22:37 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:58.417     14:22:37 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:58.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:58.417  		--rc genhtml_branch_coverage=1
00:11:58.417  		--rc genhtml_function_coverage=1
00:11:58.417  		--rc genhtml_legend=1
00:11:58.417  		--rc geninfo_all_blocks=1
00:11:58.417  		--rc geninfo_unexecuted_blocks=1
00:11:58.417  		
00:11:58.417  		'
00:11:58.417     14:22:37 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:58.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:58.417  		--rc genhtml_branch_coverage=1
00:11:58.417  		--rc genhtml_function_coverage=1
00:11:58.417  		--rc genhtml_legend=1
00:11:58.417  		--rc geninfo_all_blocks=1
00:11:58.417  		--rc geninfo_unexecuted_blocks=1
00:11:58.417  		
00:11:58.417  		'
00:11:58.417     14:22:37 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:58.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:58.417  		--rc genhtml_branch_coverage=1
00:11:58.417  		--rc genhtml_function_coverage=1
00:11:58.417  		--rc genhtml_legend=1
00:11:58.417  		--rc geninfo_all_blocks=1
00:11:58.417  		--rc geninfo_unexecuted_blocks=1
00:11:58.417  		
00:11:58.417  		'
00:11:58.417     14:22:37 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:58.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:58.417  		--rc genhtml_branch_coverage=1
00:11:58.417  		--rc genhtml_function_coverage=1
00:11:58.417  		--rc genhtml_legend=1
00:11:58.417  		--rc geninfo_all_blocks=1
00:11:58.417  		--rc geninfo_unexecuted_blocks=1
00:11:58.417  		
00:11:58.417  		'
00:11:58.417    14:22:37 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:58.417       14:22:37 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:58.417      14:22:37 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:58.417      14:22:37 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:58.417       14:22:37 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:58.417       14:22:37 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:58.417       14:22:37 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:58.417       14:22:37 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:11:58.417       14:22:37 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:11:58.417     14:22:37 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
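[annotation] functions.sh keeps four global registries, re-declared here for the fdp run: ctrls maps a controller name to its id-ctrl array, nvmes to its per-namespace map, bdfs to its PCI address, and ordered_ctrls keeps them index-sorted. A self-contained sketch of how a later lookup walks them, seeded with the nvme3 values from the scan above (the loop body is illustrative):

    declare -A ctrls nvmes bdfs nvme3=([oncs]=0x15d)
    declare -a ordered_ctrls
    ctrls[nvme3]=nvme3; nvmes[nvme3]=nvme3_ns
    bdfs[nvme3]=0000:00:13.0; ordered_ctrls[3]=nvme3
    for ctrl in "${ordered_ctrls[@]}"; do
        declare -n _ctrl=${ctrls[$ctrl]}      # nameref, as functions.sh@73 does
        echo "$ctrl @ ${bdfs[$ctrl]} ns-map=${nvmes[$ctrl]} oncs=${_ctrl[oncs]}"
    done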
00:11:58.417    14:22:37 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:11:58.417   14:22:37 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:11:58.676  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:58.934  Waiting for block devices as requested
00:11:58.934  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:11:59.192  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:11:59.192  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:11:59.192  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:04.465  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
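[annotation] `setup.sh reset` reverses the earlier binding (uio_pci_generic -> nvme) and then waits for udev to recreate the namespace block nodes; "Waiting for block devices as requested" is that wait, and the "Events ... were not caught" line is it giving up on 0000:00:13.0 before the scan proceeds anyway. A minimal way to express such a wait, where the device node and timeout are assumptions rather than the script's actual values:

    for _ in {1..50}; do                    # ~5s budget at 100ms per poll
        [[ -b /dev/nvme0n1 ]] && break      # block node is back
        sleep 0.1
    done
    udevadm settle --timeout=5 || true      # flush any remaining uevents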
00:12:04.465   14:22:43 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:12:04.465   14:22:43 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:04.465   14:22:43 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:12:04.465   14:22:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:04.465   14:22:43 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.465    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:12:04.465    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:12:04.465    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12341                ]]
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341               "'
00:12:04.465    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341               '
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:12:04.465    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.465   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:04.466   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:12:04.466    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:12:04.467    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:12:04.467   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12341 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
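(The long block above is the xtrace of nvme_get populating the nvme0 associative array: each "reg : val" line emitted by nvme-cli id-ctrl is split on ':' by the functions.sh@21 read and stored by the @23 eval; note the power-state line also splits on its embedded colons, which is why a synthetic "rwt" key appears. A minimal standalone sketch of that parse loop; the trimming details here are assumptions, not the exact functions.sh source:

    # Split each "reg : val" line of id-ctrl output into an assoc array.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # strip whitespace around the key
        [[ -n $reg ]] || continue   # skip blank or unparsable lines
        ctrl[$reg]=${val# }         # keep the value, minus one leading space
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"
)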
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"'
00:12:04.468    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:04.468   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"'
00:12:04.469    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:04.469   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
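(The @54-@58 lines iterate the controller's namespace nodes with an extglob pattern that matches both the generic ng0n* and the block nvme0n* names; since ${ns##*n} strips everything through the last 'n', ng0n1 registers under index 1 in nvme0_ns. A minimal sketch of that loop, with array and path names taken from the trace:

    # Collect this controller's namespace devices, keyed by namespace id.
    ctrl=/sys/class/nvme/nvme0
    declare -A nvme0_ns
    shopt -s extglob nullglob
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}                 # e.g. ng0n1 or nvme0n1
        nvme0_ns[${ns_dev##*n}]=$ns_dev  # ${...##*n} leaves the nsid, here 1
    done
)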
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:12:04.470    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.470   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:04.471    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.471   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
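[Annotation] At this point the trace has finished cataloguing nvme0: its id-ns fields went into the nvme0n1/ng0n1 arrays referenced by nvme0_ns, and functions.sh@60-63 registered the controller in four maps keyed by device name. A condensed sketch of that registration step (the array names are taken from the trace; the loop body around them, and how the PCI BDF is derived, are assumptions for illustration):

    # Sketch of the controller registration traced at functions.sh@60-63.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    shopt -s nullglob
    for ctrl in /sys/class/nvme/nvme*; do
        ctrl_dev=${ctrl##*/}                        # e.g. nvme0
        ctrls["$ctrl_dev"]=$ctrl_dev                # id-ctrl fields live in the array named nvme0
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns           # name of the namespace map, e.g. nvme0_ns
        # BDF derivation assumed here; the trace only shows the final value.
        bdfs["$ctrl_dev"]=$(basename "$(readlink -f "$ctrl/device")")
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # index 0 -> nvme0, 1 -> nvme1, ...
    done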
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:12:04.472   14:22:43 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:04.472   14:22:43 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:12:04.472   14:22:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:04.472   14:22:43 nvme_fdp -- scripts/common.sh@27 -- # return 0
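[Annotation] The scripts/common.sh trace above is the PCI gating check: with an empty allow list and an empty block list, pci_can_use returns 0 and the controller at 0000:00:10.0 is claimed as nvme1. A hedged sketch of that gating logic, not the verbatim SPDK helper (the PCI_ALLOWED and PCI_BLOCKED variable names are assumptions here):

    # Sketch of an allow/block gate consistent with the traced behavior.
    pci_can_use() {
        local pci=$1
        # If an allow list is set, the BDF must appear on it...
        if [[ -n ${PCI_ALLOWED:-} ]] && [[ " $PCI_ALLOWED " != *" $pci "* ]]; then
            return 1
        fi
        # ...and it must not appear on the block list.
        [[ " ${PCI_BLOCKED:-} " != *" $pci "* ]]
    }

With both lists unset, as in this run, every probed BDF passes, which matches the return 0 above.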
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340               "'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340               '
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl                          "'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl                          '
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0   "'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0   '
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:04.472   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"'
00:12:04.472    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:12:04.473    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.473   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:12:04.474    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:12:04.474    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:12:04.474    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.474   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:12:04.737    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:12:04.737    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:12:04.737    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:12:04.737    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:12:04.737    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:12:04.737    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:12:04.737    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:12:04.737    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:12:04.737   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
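[Annotation] The long block above is nvme_get filling the nvme1 array: each "field : value" line emitted by nvme id-ctrl is split on IFS=: and eval'd into an associative-array entry at functions.sh@23. Splitting on every colon is also why the multi-colon ps0 power-state line lands as the ps0/rwt/active_power_workload fragments seen just above. A condensed sketch of that parsing loop, assuming nvme-cli's default "field : value" text output (here the full command is passed as arguments, a simplification of the traced call):

    # Sketch of the nvme_get pattern traced at functions.sh@17-23.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # global assoc array, e.g. nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip headers and blank lines
            reg=${reg//[[:space:]]/}        # "ps    0" -> "ps0", strip padding
            # eval mirrors functions.sh@23; assumes well-formed nvme-cli output.
            eval "${ref}[$reg]=\"${val# }\""
        done < <("$@")
    }

    nvme_get nvme1 nvme id-ctrl /dev/nvme1  # then e.g.: echo "${nvme1[vid]}"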
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"'
00:12:04.738    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:04.738   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:04.739    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:04.739   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
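The trace above is `nvme_get` (nvme/functions.sh, lines 16-23) flattening `nvme id-ns` output into a global bash associative array: the tool's stdout is read line by line with `IFS=:`, pairs with an empty value are skipped, and each `reg`/`val` pair is `eval`ed into `ng1n1[...]`. Below is a minimal sketch of that pattern, reconstructed only from the xtrace output; the exact whitespace trimming and the `nvme_get_sketch` name are assumptions, not the verbatim SPDK helper:

    #!/usr/bin/env bash
    # Parse "key : value" lines from a command into a named global
    # associative array, mirroring the nvme_get trace above.
    nvme_get_sketch() {
        local ref=$1 reg val        # $1 names the target array, e.g. ng1n1
        shift                       # the remaining args are the command to run
        local -gA "$ref=()"         # declare the array at global scope
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}    # "lbaf  0" -> "lbaf0" (as seen in the trace)
            val=${val# }                # drop the leading space (approximate trim)
            [[ -n $val ]] && eval "${ref}[\$reg]=\$val"
        done < <("$@")
    }

    # Usage shaped like the trace (nvme-cli path and device node are illustrative):
    # nvme_get_sketch ng1n1 /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
    # echo "${ng1n1[nsze]}"    # -> 0x17a17a

Because `read` hands the last variable the remainder of the line, values that themselves contain colons, such as `ms:0   lbads:9  rp:0`, survive intact, which is why the lbaf entries above keep their inner fields.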
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x17a17a ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"'
00:12:04.740    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.740   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0   lbads:12 rp:0 "'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0   lbads:12 rp:0 '
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0 (in use) ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64  lbads:12 rp:0 (in use)"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64  lbads:12 rp:0 (in use)'
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
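With the namespace maps filled in, functions.sh@60-63 records the controller itself: the name of its id-ctrl array, the name of its namespace map, its PCI address, and a slot in an ordered list keyed by the controller number. A short sketch of that bookkeeping, reusing the array names from the trace; the helper name and the hard-coded BDF are illustrative:

    #!/usr/bin/env bash
    declare -A ctrls nvmes bdfs     # keyed by controller device name
    declare -a ordered_ctrls        # indexed by controller number

    register_ctrl_sketch() {
        local ctrl_dev=$1 bdf=$2                    # e.g. nvme1 0000:00:10.0
        ctrls["$ctrl_dev"]=$ctrl_dev                # name of its id-ctrl array
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns           # name of its namespace map
        bdfs["$ctrl_dev"]=$bdf                      # PCI address
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # nvme1 -> index 1
    }

    register_ctrl_sketch nvme1 0000:00:10.0
    echo "${bdfs[nvme1]} -> ${nvmes[nvme1]}"        # 0000:00:10.0 -> nvme1_ns

Keying `ordered_ctrls` by the numeric suffix presumably keeps the controllers in a stable numeric order even if the /sys/class/nvme glob returns them out of order.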
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:12:04.741   14:22:43 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:04.741   14:22:43 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:12.0  ]]
00:12:04.741   14:22:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:04.741   14:22:43 nvme_fdp -- scripts/common.sh@27 -- # return 0
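Before touching nvme2, the script gates on `pci_can_use 0000:00:12.0` (scripts/common.sh@18-27): one test matches the BDF against an expansion that is empty in this run, a second checks that another list is empty, and the function returns 0. A plausible reading is an allow/block-list filter; the `PCI_ALLOWED`/`PCI_BLOCKED` names below are an assumption inferred from those empty expansions, not verified against the SPDK source:

    #!/usr/bin/env bash
    # Hypothetical allow/block-list gate for a PCI address.
    pci_can_use_sketch() {
        local i bdf=$1
        # If an allow-list is set, the BDF must appear in it.
        if [[ -n ${PCI_ALLOWED:-} ]]; then
            [[ " $PCI_ALLOWED " == *" $bdf "* ]] || return 1
        fi
        # An empty block-list (the case in this run) filters nothing out.
        [[ -z ${PCI_BLOCKED:-} ]] && return 0
        for i in $PCI_BLOCKED; do
            [[ $i == "$bdf" ]] && return 1
        done
        return 0
    }

    pci_can_use_sketch 0000:00:12.0 && echo "0000:00:12.0 is usable"

In this run both lists are empty, so every enumerated controller passes the gate and gets an entry in the maps above.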
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"'
00:12:04.741    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.741   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12342                ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342               "'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342               '
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl                          "'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl                          '
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0   "'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0   '
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.742   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"'
00:12:04.742    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"'
00:12:04.743    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.743   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12342 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"'
00:12:04.744    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
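The trace above is the tail of a single nvme_get call: functions.sh runs nvme-cli once (the @16 line), then splits every "field : value" row on the colon and stores it in a global associative array (nvme2[sqes]=0x66, nvme2[cqes]=0x44, and so on). Judging from the @16-@23 line tags, the helper looks roughly like the sketch below; the trimming details and the process-substitution plumbing are assumptions, while the ref/reg/val names, the local -gA declaration, and the eval pattern are confirmed by the trace.

  # Sketch of nvme_get reconstructed from the functions.sh@16-23 markers;
  # padding-trim details are assumptions, not the verbatim SPDK source.
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                      # e.g. declares global assoc array nvme2=()

      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue            # skip blank/header lines
          reg=${reg//[[:space:]]/}             # "sqes      " -> "sqes"
          val=${val#"${val%%[! ]*}"}           # " 0x66" -> "0x66"
          eval "${ref}[${reg}]=\"${val}\""     # nvme2[sqes]="0x66"
      done < <("$@")                           # "$@" is e.g.: nvme id-ctrl /dev/nvme2
  }

Because read hands the remainder of each line to val with its separators intact, ps0 keeps its embedded colons; the odd rwt key just above comes from nvme-cli wrapping the power-state row onto a second output line, whose first colon ("rwt:0") then becomes the new key/value split point.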
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:12:04.744   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"'
00:12:04.745    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.745   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
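With the controller array filled in, functions.sh walks that controller's namespaces: the @54-@58 lines above glob both the generic-char (ng2n1) and block (nvme2n1) node names under /sys/class/nvme/nvme2, call nvme_get ... id-ns on each, and record the device in the nvme2_ns map keyed by namespace id. A rough reconstruction of that loop follows; the wrapper name and the shopt setup are assumptions needed to make the @54 extglob pattern work standalone.

  # Sketch of the per-controller namespace scan seen at functions.sh@53-58;
  # scan_ctrl_namespaces is a hypothetical wrapper name.
  shopt -s extglob nullglob

  scan_ctrl_namespaces() {
      local ctrl=$1 ns ns_dev                      # ctrl: /sys/class/nvme/nvme2
      local -n _ctrl_ns="${ctrl##*/}_ns"           # nameref onto nvme2_ns

      # Matches ng2n* (char nodes) as well as nvme2n* (block nodes).
      for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
          [[ -e $ns ]] || continue
          ns_dev=${ns##*/}                         # ng2n1, ng2n2, ...
          nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # fills assoc array ng2n1[] etc.
          _ctrl_ns[${ns##*n}]=$ns_dev              # key: namespace id (1, 2, ...)
      done
  }

The ${ns##*n} key strips everything up to the last "n", so /sys/class/nvme/nvme2/ng2n1 lands in slot 1, which is exactly the _ctrl_ns[...]=ng2n1 assignment logged at @58.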
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()'
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.746   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"'
00:12:04.746    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"'
00:12:04.747    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"'
00:12:04.747    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"'
00:12:04.747    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"'
00:12:04.747    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"'
00:12:04.747    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"'
00:12:04.747    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"'
00:12:04.747    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"'
00:12:04.747    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"'
00:12:04.747    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:04.747   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
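At this point two namespaces of this controller have been parsed identically (ng2n1 and ng2n2, both nsze=0x100000 with LBA format 4 in use). Once the arrays exist, later checks can read them directly; as a hypothetical consumer, not part of functions.sh, the helper below derives a namespace's byte size from the fields logged above: the low nibble of flbas selects the active lbaf entry, whose lbads is the log2 of the block size.

  # Hypothetical consumer of the arrays populated above:
  # byte size = nsze blocks * 2^lbads bytes.
  ns_bytes() {
      local -n _ns=$1
      local fmt=$(( _ns[flbas] & 0xf ))        # 0x4 -> LBA format 4
      local lbads=${_ns[lbaf$fmt]#*lbads:}     # "ms:0   lbads:12 rp:0 (in use)"
      lbads=${lbads%% *}                       # -> 12, i.e. 4096-byte blocks
      echo $(( _ns[nsze] * (1 << lbads) ))
  }

  ns_bytes ng2n1   # 0x100000 * 4096 = 4294967296 bytes (4 GiB)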
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"'
00:12:05.010    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.010   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.011   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"'
00:12:05.011    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
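(Annotation) The run of @17-@23 lines ending here is one complete pass of nvme_get, the nvme/functions.sh helper that captures nvme-cli "id-ns" output into a global bash associative array (ng2n3 above). A minimal sketch reconstructed from this xtrace; the exact whitespace trimming and the $nvme variable are assumptions inferred from the expansions shown:

    nvme_get() {
        local ref=$1 reg val              # @17: target array name, e.g. ng2n3
        shift                             # @18: remaining args form the nvme-cli invocation
        local -gA "$ref=()"               # @20: declare the array globally
        while IFS=: read -r reg val; do   # @21: split each "field : value" line on the first ':'
            # @22-@23: skip empty values; strip spaces from the key and the value's
            # leading blank (assumed trimming, matching e.g. ng2n3[nsze]=0x100000
            # and ng2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)')
            [[ -n $val ]] && eval "${ref}[${reg// /}]=\"${val# }\""
        done < <("$nvme" "$@")            # @16: $nvme assumed to hold the nvme-cli path,
                                          #      e.g. /usr/local/src/nvme-cli/nvme
    }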
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"'
00:12:05.012    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.012   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:05.013    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
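(Annotation) The @54-@58 lines bracketing each block are the per-controller namespace walk: for every ng2nX/nvme2nX node under the controller's sysfs directory, call nvme_get and record the populated array's name in _ctrl_ns, keyed by namespace index. A sketch reconstructed from the trace; the ns_dev derivation and the surrounding declarations are assumptions:

    shopt -s extglob                    # the @(...) alternation in the glob needs extglob
    declare -A _ctrl_ns                 # index -> array name, assumed declared by the caller
    ctrl=/sys/class/nvme/nvme2          # set by the enclosing controller loop

    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54: matches ng2n* and nvme2n*
        [[ -e $ns ]] || continue        # @55: e.g. /sys/class/nvme/nvme2/nvme2n1
        ns_dev=${ns##*/}                # @56: basename, e.g. nvme2n1 (assumed derivation)
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57: fills the nvme2n1=() array
        _ctrl_ns[${ns##*n}]=$ns_dev     # @58: ".../nvme2n1" -> key 1
    done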
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:12:05.013   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"'
00:12:05.014    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:12:05.014   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
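(Annotation) Once a namespace has been walked, its identify data is an ordinary bash associative array, so later test steps can read fields by name. A hypothetical readback under that assumption, using the array names populated above:

    # print a few identify fields captured for nvme2n2
    for key in nsze ncap nuse nsfeat flbas; do
        printf '%s: %s\n' "$key" "${nvme2n2[$key]}"
    done
    # per the trace: nsze/ncap/nuse are 0x100000, nsfeat is 0x14, flbas is 0x4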
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100000 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:12:05.015   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"'
00:12:05.015    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0   lbads:9  rp:0 "'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0   lbads:9  rp:0 '
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8   lbads:9  rp:0 "'
00:12:05.016    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8   lbads:9  rp:0 '
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:12:05.016   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16  lbads:9  rp:0 "'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16  lbads:9  rp:0 '
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64  lbads:9  rp:0 "'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64  lbads:9  rp:0 '
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8   lbads:12 rp:0 "'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8   lbads:12 rp:0 '
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16  lbads:12 rp:0 "'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16  lbads:12 rp:0 '
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64  lbads:12 rp:0 "'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64  lbads:12 rp:0 '
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
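The block above is the tail of nvme_get for namespace nvme2n3: functions.sh feeds the
'nvme id-ns' output through a 'while IFS=: read -r reg val' loop and eval's each
"field : value" pair into a bash associative array, which is why the trace repeats the
same IFS/read/eval triple once per register. A minimal sketch of that pattern, assuming
nvme-cli is installed and /dev/nvme2n3 exists (the real script goes through eval and
keeps the value's padding; this sketch assigns directly):

    declare -A ns
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip banner lines with no "field : value"
        reg=${reg//[[:space:]]/}         # "flbas  " -> "flbas"
        ns[$reg]=$val                    # colons after the first stay in val (lbaf0 etc.)
    done < <(nvme id-ns /dev/nvme2n3)
    echo "flbas=${ns[flbas]} nlbaf=${ns[nlbaf]}"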
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:12:05.017   14:22:43 nvme_fdp -- scripts/common.sh@18 -- # local i
00:12:05.017   14:22:43 nvme_fdp -- scripts/common.sh@21 -- # [[    =~  0000:00:13.0  ]]
00:12:05.017   14:22:43 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:05.017   14:22:43 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  12343                ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343               "'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343               '
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl                          "'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl                          '
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0   "'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0   '
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x2 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x88010 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"'
00:12:05.017    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.017   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"'
00:12:05.018    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.018   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"'
00:12:05.019    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:fdp-subsys3 ]]
00:12:05.019   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"'
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"'
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"'
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"'
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"'
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"'
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"'
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"'
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=-
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:12:05.020   14:22:43 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:12:05.020    14:22:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]]
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]]
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:12:05.020      14:22:43 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:12:05.020     14:22:43 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 ))
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:12:05.020    14:22:43 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:12:05.020   14:22:43 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:12:05.020   14:22:43 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
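The get_ctrl_with_feature pass above reduces to one bit test per controller:
ctrl_has_fdp succeeds only when CTRATT bit 19 (Flexible Data Placement) is set, so
nvme3 (ctratt=0x88010) is selected while the three controllers reporting 0x8000 are
skipped. A minimal standalone sketch of the same check, assuming the per-controller
array was populated as in the trace:

    ctrl_has_fdp() {
        local -n _ctrl=$1                  # nameref to the per-controller array
        (( ${_ctrl[ctratt]} & 1 << 19 ))   # CTRATT bit 19 => FDP supported
    }
    declare -A nvme3=([ctratt]=0x88010)
    ctrl_has_fdp nvme3 && echo nvme3       # prints nvme3, matching the trace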
00:12:05.020   14:22:43 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:05.587  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:06.182  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:06.182  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:06.182  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:12:06.182  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
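setup.sh detaches the four QEMU NVMe functions from the kernel nvme driver and binds
them to uio_pci_generic so the userspace test binary can claim them; 0000:00:03.0 is
left alone because, as the log notes, its virtio-blk device backs mounted filesystems.
The rebind can be verified out-of-band through standard sysfs, e.g.:

    readlink /sys/bus/pci/devices/0000:00:13.0/driver
    # expected to resolve to .../drivers/uio_pci_generic after setup.sh has run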
00:12:06.182   14:22:45 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:12:06.182   14:22:45 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:06.182   14:22:45 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:06.182   14:22:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:12:06.182  ************************************
00:12:06.182  START TEST nvme_flexible_data_placement
00:12:06.182  ************************************
00:12:06.182   14:22:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:12:06.441  Initializing NVMe Controllers
00:12:06.441  Attaching to 0000:00:13.0
00:12:06.441  Controller supports FDP Attached to 0000:00:13.0
00:12:06.441  Namespace ID: 1 Endurance Group ID: 1
00:12:06.441  Initialization complete.
00:12:06.441  
00:12:06.441  ==================================
00:12:06.441  == FDP tests for Namespace: #01 ==
00:12:06.441  ==================================
00:12:06.441  
00:12:06.441  Get Feature: FDP:
00:12:06.441  =================
00:12:06.441    Enabled:                 Yes
00:12:06.441    FDP configuration Index: 0
00:12:06.441  
00:12:06.441  FDP configurations log page
00:12:06.441  ===========================
00:12:06.441  Number of FDP configurations:         1
00:12:06.441  Version:                              0
00:12:06.441  Size:                                 112
00:12:06.441  FDP Configuration Descriptor:         0
00:12:06.441    Descriptor Size:                    96
00:12:06.441    Reclaim Group Identifier format:    2
00:12:06.441    FDP Volatile Write Cache:           Not Present
00:12:06.441    FDP Configuration:                  Valid
00:12:06.441    Vendor Specific Size:               0
00:12:06.441    Number of Reclaim Groups:           2
00:12:06.441    Number of Reclaim Unit Handles:     8
00:12:06.441    Max Placement Identifiers:          128
00:12:06.441    Number of Namespaces Supported:     256
00:12:06.441    Reclaim Unit Nominal Size:          6000000 bytes
00:12:06.441    Estimated Reclaim Unit Time Limit:  Not Reported
00:12:06.441      RUH Desc #000:          RUH Type: Initially Isolated
00:12:06.441      RUH Desc #001:          RUH Type: Initially Isolated
00:12:06.441      RUH Desc #002:          RUH Type: Initially Isolated
00:12:06.441      RUH Desc #003:          RUH Type: Initially Isolated
00:12:06.441      RUH Desc #004:          RUH Type: Initially Isolated
00:12:06.441      RUH Desc #005:          RUH Type: Initially Isolated
00:12:06.441      RUH Desc #006:          RUH Type: Initially Isolated
00:12:06.441      RUH Desc #007:          RUH Type: Initially Isolated
00:12:06.441  
00:12:06.441  FDP reclaim unit handle usage log page
00:12:06.441  ======================================
00:12:06.441  Number of Reclaim Unit Handles:       8
00:12:06.441    RUH Usage Desc #000:   RUH Attributes: Controller Specified
00:12:06.441    RUH Usage Desc #001:   RUH Attributes: Unused
00:12:06.441    RUH Usage Desc #002:   RUH Attributes: Unused
00:12:06.441    RUH Usage Desc #003:   RUH Attributes: Unused
00:12:06.441    RUH Usage Desc #004:   RUH Attributes: Unused
00:12:06.441    RUH Usage Desc #005:   RUH Attributes: Unused
00:12:06.441    RUH Usage Desc #006:   RUH Attributes: Unused
00:12:06.441    RUH Usage Desc #007:   RUH Attributes: Unused
00:12:06.441  
00:12:06.441  FDP statistics log page
00:12:06.441  =======================
00:12:06.441  Host bytes with metadata written:  770322432
00:12:06.441  Media bytes with metadata written: 770400256
00:12:06.441  Media bytes erased:                0
00:12:06.441  
00:12:06.441  FDP Reclaim unit handle status
00:12:06.441  ==============================
00:12:06.441  Number of RUHS descriptors:   2
00:12:06.441  RUHS Desc: #0000  PID: 0x0000  RUHID: 0x0000  ERUT: 0x00000000  RUAMW: 0x000000000000215d
00:12:06.441  RUHS Desc: #0001  PID: 0x4000  RUHID: 0x0000  ERUT: 0x00000000  RUAMW: 0x0000000000006000
00:12:06.441  
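With Reclaim Group Identifier format 2 reported in the configuration log above, the top two bits of each 16-bit placement ID select the reclaim group, which is how PID 0x4000 lands in the second of the two reclaim groups. A worked sketch of that decode, assuming the standard FDP PID layout (reclaim group in the most-significant RGIF bits, placement handle in the rest):

    pid=0x4000 rgif=2
    rg=$(( pid >> (16 - rgif) ))                # reclaim group from the top rgif bits
    ph=$(( pid & ((1 << (16 - rgif)) - 1) ))    # placement handle from the remainder
    echo "RG=$rg PH=$ph"                        # prints RG=1 PH=0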
00:12:06.441  FDP write on placement id: 0 success
00:12:06.441  
00:12:06.441  Set Feature: Enabling FDP events on Placement handle: #0 Success
00:12:06.441  
00:12:06.441  IO mgmt send: RUH update for Placement ID: #0 Success
00:12:06.441  
00:12:06.441  Get Feature: FDP Events for Placement handle: #0
00:12:06.441  ================================================
00:12:06.441  Number of FDP Events: 6
00:12:06.441  FDP Event: #0  Type: RU Not Written to Capacity     Enabled: Yes
00:12:06.441  FDP Event: #1  Type: RU Time Limit Exceeded         Enabled: Yes
00:12:06.441  FDP Event: #2  Type: Ctrlr Reset Modified RUH's     Enabled: Yes
00:12:06.441  FDP Event: #3  Type: Invalid Placement Identifier   Enabled: Yes
00:12:06.441  FDP Event: #4  Type: Media Reallocated              Enabled: No
00:12:06.441  FDP Event: #5  Type: Implicitly modified RUH        Enabled: No
00:12:06.441  
00:12:06.441  FDP events log page
00:12:06.441  ===================
00:12:06.441  Number of FDP events: 1
00:12:06.441  FDP Event #0:
00:12:06.441    Event Type:                      RU Not Written to Capacity
00:12:06.441    Placement Identifier:            Valid
00:12:06.441    NSID:                            Valid
00:12:06.441    Location:                        Valid
00:12:06.441    Placement Identifier:            0
00:12:06.441    Event Timestamp:                 8
00:12:06.441    Namespace Identifier:            1
00:12:06.441    Reclaim Group Identifier:        0
00:12:06.441    Reclaim Unit Handle Identifier:  0
00:12:06.441  
00:12:06.441  FDP test passed
00:12:06.441  
00:12:06.441  real	0m0.308s
00:12:06.441  user	0m0.113s
00:12:06.441  sys	0m0.092s
00:12:06.441   14:22:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:06.441   14:22:45 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:12:06.441  ************************************
00:12:06.441  END TEST nvme_flexible_data_placement
00:12:06.441  ************************************
00:12:06.441  
00:12:06.441  real	0m8.231s
00:12:06.441  user	0m1.631s
00:12:06.441  sys	0m1.594s
00:12:06.441   14:22:45 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:06.441   14:22:45 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:12:06.441  ************************************
00:12:06.441  END TEST nvme_fdp
00:12:06.441  ************************************
00:12:06.441   14:22:45  -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:12:06.441   14:22:45  -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:12:06.441   14:22:45  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:06.441   14:22:45  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:06.441   14:22:45  -- common/autotest_common.sh@10 -- # set +x
00:12:06.441  ************************************
00:12:06.441  START TEST nvme_rpc
00:12:06.441  ************************************
00:12:06.441   14:22:45 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:12:06.700  * Looking for test storage...
00:12:06.700  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:06.700    14:22:45 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:06.700     14:22:45 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:12:06.700     14:22:45 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:06.700    14:22:45 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@345 -- # : 1
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:06.700     14:22:45 nvme_rpc -- scripts/common.sh@365 -- # decimal 1
00:12:06.700     14:22:45 nvme_rpc -- scripts/common.sh@353 -- # local d=1
00:12:06.700     14:22:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:06.700     14:22:45 nvme_rpc -- scripts/common.sh@355 -- # echo 1
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:12:06.700     14:22:45 nvme_rpc -- scripts/common.sh@366 -- # decimal 2
00:12:06.700     14:22:45 nvme_rpc -- scripts/common.sh@353 -- # local d=2
00:12:06.700     14:22:45 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:06.700     14:22:45 nvme_rpc -- scripts/common.sh@355 -- # echo 2
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:06.700    14:22:45 nvme_rpc -- scripts/common.sh@368 -- # return 0
00:12:06.700    14:22:45 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:06.700    14:22:45 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:06.700  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.700  		--rc genhtml_branch_coverage=1
00:12:06.700  		--rc genhtml_function_coverage=1
00:12:06.700  		--rc genhtml_legend=1
00:12:06.700  		--rc geninfo_all_blocks=1
00:12:06.700  		--rc geninfo_unexecuted_blocks=1
00:12:06.700  		
00:12:06.700  		'
00:12:06.700    14:22:45 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:06.700  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.700  		--rc genhtml_branch_coverage=1
00:12:06.700  		--rc genhtml_function_coverage=1
00:12:06.700  		--rc genhtml_legend=1
00:12:06.700  		--rc geninfo_all_blocks=1
00:12:06.700  		--rc geninfo_unexecuted_blocks=1
00:12:06.700  		
00:12:06.700  		'
00:12:06.700    14:22:45 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:06.700  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.700  		--rc genhtml_branch_coverage=1
00:12:06.700  		--rc genhtml_function_coverage=1
00:12:06.700  		--rc genhtml_legend=1
00:12:06.700  		--rc geninfo_all_blocks=1
00:12:06.700  		--rc geninfo_unexecuted_blocks=1
00:12:06.700  		
00:12:06.700  		'
00:12:06.700    14:22:45 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:06.700  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:06.700  		--rc genhtml_branch_coverage=1
00:12:06.700  		--rc genhtml_function_coverage=1
00:12:06.700  		--rc genhtml_legend=1
00:12:06.700  		--rc geninfo_all_blocks=1
00:12:06.700  		--rc geninfo_unexecuted_blocks=1
00:12:06.700  		
00:12:06.700  		'
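The cmp_versions walk above splits each version string on [.:-] and compares the fields numerically until one differs; here it confirms the installed lcov predates 2.x, so the branch/function-coverage rc options are kept. A condensed equivalent, assuming GNU sort's -V version ordering:

    lt() {  # true when $1 sorts strictly before $2 as a version
        [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: keep coverage rc options"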
00:12:06.700   14:22:45 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:06.700    14:22:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:12:06.700    14:22:45 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=()
00:12:06.700    14:22:45 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs
00:12:06.700    14:22:45 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:12:06.700     14:22:45 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:12:06.700     14:22:45 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=()
00:12:06.701     14:22:45 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs
00:12:06.701     14:22:45 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:12:06.701      14:22:45 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:12:06.701      14:22:45 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:12:06.701     14:22:45 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:12:06.701     14:22:45 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:12:06.701    14:22:45 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:12:06.701   14:22:45 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
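get_first_nvme_bdf above is just the head of the generated controller list; a sketch with the paths from the trace:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1    # bail out when no controllers are found
    bdf=${bdfs[0]}                     # 0000:00:10.0 in this run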
00:12:06.701   14:22:45 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67584
00:12:06.701   14:22:45 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:12:06.701   14:22:45 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:12:06.701   14:22:45 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67584
00:12:06.701   14:22:45 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67584 ']'
00:12:06.701   14:22:45 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:06.701   14:22:45 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:06.701  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:06.701   14:22:45 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:06.701   14:22:45 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:06.701   14:22:45 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:06.959  [2024-11-20 14:22:45.784641] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:12:06.959  [2024-11-20 14:22:45.784795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67584 ]
00:12:07.216  [2024-11-20 14:22:45.957723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:07.216  [2024-11-20 14:22:46.062299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:07.216  [2024-11-20 14:22:46.062306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:08.150   14:22:46 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:08.150   14:22:46 nvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:12:08.150   14:22:46 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:12:08.409  Nvme0n1
00:12:08.409   14:22:47 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:12:08.409   14:22:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:12:08.666  request:
00:12:08.666  {
00:12:08.667    "bdev_name": "Nvme0n1",
00:12:08.667    "filename": "non_existing_file",
00:12:08.667    "method": "bdev_nvme_apply_firmware",
00:12:08.667    "req_id": 1
00:12:08.667  }
00:12:08.667  Got JSON-RPC error response
00:12:08.667  response:
00:12:08.667  {
00:12:08.667    "code": -32603,
00:12:08.667    "message": "open file failed."
00:12:08.667  }
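The -32603 "open file failed." response above is the point of the test: bdev_nvme_apply_firmware must reject a firmware path that does not exist. A sketch of the whole sequence, paths as in the trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    rv=0
    $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1 || rv=1
    [ "$rv" -eq 1 ] || exit 1          # a successful apply here would fail the test
    $rpc_py bdev_nvme_detach_controller Nvme0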
00:12:08.667   14:22:47 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:12:08.667   14:22:47 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:12:08.667   14:22:47 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:12:08.925   14:22:47 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:12:08.925   14:22:47 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67584
00:12:08.925   14:22:47 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67584 ']'
00:12:08.925   14:22:47 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67584
00:12:08.925    14:22:47 nvme_rpc -- common/autotest_common.sh@959 -- # uname
00:12:08.925   14:22:47 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:08.925    14:22:47 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67584
00:12:08.925   14:22:47 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:08.925   14:22:47 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:08.925  killing process with pid 67584
00:12:08.925   14:22:47 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67584'
00:12:08.925   14:22:47 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67584
00:12:08.925   14:22:47 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67584
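killprocess above guards the signal with a liveness probe (kill -0) and a ps lookup so it never targets a sudo wrapper by mistake; a condensed sketch of that shape:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null
    }
    killprocess 67584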
00:12:11.456  
00:12:11.456  real	0m4.448s
00:12:11.456  user	0m8.690s
00:12:11.456  sys	0m0.599s
00:12:11.456   14:22:49 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:11.456   14:22:49 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:11.456  ************************************
00:12:11.456  END TEST nvme_rpc
00:12:11.456  ************************************
00:12:11.457   14:22:49  -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:12:11.457   14:22:49  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:11.457   14:22:49  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:11.457   14:22:49  -- common/autotest_common.sh@10 -- # set +x
00:12:11.457  ************************************
00:12:11.457  START TEST nvme_rpc_timeouts
00:12:11.457  ************************************
00:12:11.457   14:22:49 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:12:11.457  * Looking for test storage...
00:12:11.457  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:11.457    14:22:49 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:11.457     14:22:49 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version
00:12:11.457     14:22:49 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:11.457    14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-:
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-:
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<'
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:11.457     14:22:50 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1
00:12:11.457     14:22:50 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1
00:12:11.457     14:22:50 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:11.457     14:22:50 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1
00:12:11.457     14:22:50 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2
00:12:11.457     14:22:50 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2
00:12:11.457     14:22:50 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:11.457     14:22:50 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:11.457    14:22:50 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0
00:12:11.457    14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:11.457    14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:11.457  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.457  		--rc genhtml_branch_coverage=1
00:12:11.457  		--rc genhtml_function_coverage=1
00:12:11.457  		--rc genhtml_legend=1
00:12:11.457  		--rc geninfo_all_blocks=1
00:12:11.457  		--rc geninfo_unexecuted_blocks=1
00:12:11.457  		
00:12:11.457  		'
00:12:11.457    14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:11.457  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.457  		--rc genhtml_branch_coverage=1
00:12:11.457  		--rc genhtml_function_coverage=1
00:12:11.457  		--rc genhtml_legend=1
00:12:11.457  		--rc geninfo_all_blocks=1
00:12:11.457  		--rc geninfo_unexecuted_blocks=1
00:12:11.457  		
00:12:11.457  		'
00:12:11.457    14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:11.457  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.457  		--rc genhtml_branch_coverage=1
00:12:11.457  		--rc genhtml_function_coverage=1
00:12:11.457  		--rc genhtml_legend=1
00:12:11.457  		--rc geninfo_all_blocks=1
00:12:11.457  		--rc geninfo_unexecuted_blocks=1
00:12:11.457  		
00:12:11.457  		'
00:12:11.457    14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:11.457  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.457  		--rc genhtml_branch_coverage=1
00:12:11.457  		--rc genhtml_function_coverage=1
00:12:11.457  		--rc genhtml_legend=1
00:12:11.457  		--rc geninfo_all_blocks=1
00:12:11.457  		--rc geninfo_unexecuted_blocks=1
00:12:11.457  		
00:12:11.457  		'
00:12:11.457   14:22:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:11.457   14:22:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67660
00:12:11.457   14:22:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67660
00:12:11.457   14:22:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67692
00:12:11.457   14:22:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:12:11.457   14:22:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:12:11.457   14:22:50 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67692
00:12:11.457   14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67692 ']'
00:12:11.457   14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:11.457   14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:11.457  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:11.457   14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:11.457   14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:11.457   14:22:50 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:12:11.457  [2024-11-20 14:22:50.205884] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:12:11.457  [2024-11-20 14:22:50.206050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67692 ]
00:12:11.457  [2024-11-20 14:22:50.379767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:11.715  [2024-11-20 14:22:50.486555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:11.715  [2024-11-20 14:22:50.486556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:12.649   14:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:12.649   14:22:51 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0
00:12:12.649  Checking default timeout settings:
00:12:12.649   14:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:12:12.649   14:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:12:12.907  Making settings changes with rpc:
00:12:12.907   14:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:12:12.907   14:22:51 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:12:13.165  Check default vs. modified settings:
00:12:13.165   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:12:13.165   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67660
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67660
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:12:13.733  Setting action_on_timeout is changed as expected.
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67660
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67660
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:12:13.733  Setting timeout_us is changed as expected.
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67660
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67660
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:12:13.733    14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:12:13.733  Setting timeout_admin_us is changed as expected.
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
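A condensed sketch of the default-vs-modified comparison traced above, reusing the temp files and the grep/awk/sed scrub from the trace:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67660 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67660 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" = "$after" ] && exit 1   # the modified config must differ
        echo "Setting $setting is changed as expected."
    done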
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67660 /tmp/settings_modified_67660
00:12:13.733   14:22:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67692
00:12:13.733   14:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67692 ']'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67692
00:12:13.733    14:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname
00:12:13.733   14:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:13.733    14:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67692
00:12:13.733   14:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:13.733  killing process with pid 67692
00:12:13.733   14:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67692'
00:12:13.733   14:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67692
00:12:13.733   14:22:52 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67692
00:12:16.278  RPC TIMEOUT SETTING TEST PASSED.
00:12:16.278   14:22:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:12:16.278  ************************************
00:12:16.278  END TEST nvme_rpc_timeouts
00:12:16.278  ************************************
00:12:16.278  
00:12:16.278  real	0m4.809s
00:12:16.278  user	0m9.500s
00:12:16.278  sys	0m0.615s
00:12:16.278   14:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:16.278   14:22:54 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:12:16.278    14:22:54  -- spdk/autotest.sh@239 -- # uname -s
00:12:16.278   14:22:54  -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']'
00:12:16.278   14:22:54  -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:12:16.278   14:22:54  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:16.278   14:22:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:16.278   14:22:54  -- common/autotest_common.sh@10 -- # set +x
00:12:16.278  ************************************
00:12:16.278  START TEST sw_hotplug
00:12:16.278  ************************************
00:12:16.278   14:22:54 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:12:16.278  * Looking for test storage...
00:12:16.278  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:16.278    14:22:54 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:16.278     14:22:54 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version
00:12:16.278     14:22:54 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:16.278    14:22:54 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-:
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-:
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<'
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@345 -- # : 1
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:16.278     14:22:54 sw_hotplug -- scripts/common.sh@365 -- # decimal 1
00:12:16.278     14:22:54 sw_hotplug -- scripts/common.sh@353 -- # local d=1
00:12:16.278     14:22:54 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:16.278     14:22:54 sw_hotplug -- scripts/common.sh@355 -- # echo 1
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1
00:12:16.278     14:22:54 sw_hotplug -- scripts/common.sh@366 -- # decimal 2
00:12:16.278     14:22:54 sw_hotplug -- scripts/common.sh@353 -- # local d=2
00:12:16.278     14:22:54 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:16.278     14:22:54 sw_hotplug -- scripts/common.sh@355 -- # echo 2
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:16.278    14:22:54 sw_hotplug -- scripts/common.sh@368 -- # return 0
00:12:16.278    14:22:54 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:16.278    14:22:54 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:16.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:16.278  		--rc genhtml_branch_coverage=1
00:12:16.278  		--rc genhtml_function_coverage=1
00:12:16.278  		--rc genhtml_legend=1
00:12:16.278  		--rc geninfo_all_blocks=1
00:12:16.278  		--rc geninfo_unexecuted_blocks=1
00:12:16.278  		
00:12:16.278  		'
00:12:16.278    14:22:54 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:16.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:16.278  		--rc genhtml_branch_coverage=1
00:12:16.278  		--rc genhtml_function_coverage=1
00:12:16.278  		--rc genhtml_legend=1
00:12:16.278  		--rc geninfo_all_blocks=1
00:12:16.278  		--rc geninfo_unexecuted_blocks=1
00:12:16.278  		
00:12:16.278  		'
00:12:16.278    14:22:54 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:16.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:16.278  		--rc genhtml_branch_coverage=1
00:12:16.278  		--rc genhtml_function_coverage=1
00:12:16.278  		--rc genhtml_legend=1
00:12:16.278  		--rc geninfo_all_blocks=1
00:12:16.278  		--rc geninfo_unexecuted_blocks=1
00:12:16.278  		
00:12:16.278  		'
00:12:16.278    14:22:54 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:16.278  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:16.278  		--rc genhtml_branch_coverage=1
00:12:16.278  		--rc genhtml_function_coverage=1
00:12:16.278  		--rc genhtml_legend=1
00:12:16.278  		--rc geninfo_all_blocks=1
00:12:16.278  		--rc geninfo_unexecuted_blocks=1
00:12:16.278  		
00:12:16.278  		'
00:12:16.278   14:22:54 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:16.278  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:16.537  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:16.537  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:16.537  0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:16.537  0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:16.537   14:22:55 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6
00:12:16.537   14:22:55 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3
00:12:16.537   14:22:55 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace))
00:12:16.537    14:22:55 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@313 -- # local nvmes
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]]
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@298 -- # local bdf=
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@233 -- # local class
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@234 -- # local subclass
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@235 -- # local progif
00:12:16.537       14:22:55 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@236 -- # class=01
00:12:16.537       14:22:55 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@237 -- # subclass=08
00:12:16.537       14:22:55 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@238 -- # progif=02
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@240 -- # hash lspci
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:12:16.537      14:22:55 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"'
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@18 -- # local i
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@18 -- # local i
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:11.0  ]]
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@18 -- # local i
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:12.0  ]]
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@18 -- # local i
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:13.0  ]]
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]]
00:12:16.537     14:22:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:12:16.537    14:22:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]]
00:12:16.538     14:22:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]]
00:12:16.538     14:22:55 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@328 -- # (( 4 ))
00:12:16.538    14:22:55 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
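The scan above keys off the PCI signature of an NVMe controller: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). The pipeline from the trace is runnable on its own:

    # print the bdf of every NVMe controller in the system
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'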
00:12:16.538   14:22:55 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2
00:12:16.538   14:22:55 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}")
00:12:16.538   14:22:55 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:12:17.104  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:17.104  Waiting for block devices as requested
00:12:17.104  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:12:17.104  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:12:17.362  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:12:17.362  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:22.626  * Events for some block/disk devices (0000:00:13.0) were not caught; they may be missing
00:12:22.626   14:23:01 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0'
00:12:22.626   14:23:01 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:22.886  0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0
00:12:22.886  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:22.886  0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0
00:12:23.144  0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0
00:12:23.402  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:23.661  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:23.661   14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable
00:12:23.661   14:23:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:12:23.661   14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug
00:12:23.661   14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
00:12:23.661   14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68560
00:12:23.661   14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning
00:12:23.661   14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false
00:12:23.661   14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:12:23.662    14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false
00:12:23.662    14:23:02 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:12:23.662    14:23:02 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:12:23.662    14:23:02 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:12:23.662    14:23:02 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:12:23.662     14:23:02 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false
00:12:23.662     14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:12:23.662     14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:12:23.662     14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false
00:12:23.662     14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:12:23.662     14:23:02 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
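While the helper sleeps, the echo-1 steps traced below (sw_hotplug.sh@40 and @56) drive a surprise removal and re-attach cycle. A minimal sketch, assuming those writes target the usual sysfs hotplug files:

    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # surprise-remove the controller
    done
    sleep 6                                           # hotplug_wait from the trace
    echo 1 > /sys/bus/pci/rescan                      # bring both devices back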
00:12:23.921  Initializing NVMe Controllers
00:12:23.921  Attaching to 0000:00:10.0
00:12:23.921  Attaching to 0000:00:11.0
00:12:23.921  Attached to 0000:00:10.0
00:12:23.921  Attached to 0000:00:11.0
00:12:23.921  Initialization complete. Starting I/O...
00:12:23.921  QEMU NVMe Ctrl       (12340               ):          0 I/Os completed (+0)
00:12:23.921  QEMU NVMe Ctrl       (12341               ):          0 I/Os completed (+0)
00:12:23.921  
00:12:24.857  QEMU NVMe Ctrl       (12340               ):       1607 I/Os completed (+1607)
00:12:24.857  QEMU NVMe Ctrl       (12341               ):       1792 I/Os completed (+1792)
00:12:24.857  
00:12:26.231  QEMU NVMe Ctrl       (12340               ):       2930 I/Os completed (+1323)
00:12:26.231  QEMU NVMe Ctrl       (12341               ):       3330 I/Os completed (+1538)
00:12:26.231  
00:12:27.166  QEMU NVMe Ctrl       (12340               ):       4599 I/Os completed (+1669)
00:12:27.166  QEMU NVMe Ctrl       (12341               ):       5100 I/Os completed (+1770)
00:12:27.166  
00:12:28.099  QEMU NVMe Ctrl       (12340               ):       6215 I/Os completed (+1616)
00:12:28.099  QEMU NVMe Ctrl       (12341               ):       6962 I/Os completed (+1862)
00:12:28.099  
00:12:29.034  QEMU NVMe Ctrl       (12340               ):       7767 I/Os completed (+1552)
00:12:29.034  QEMU NVMe Ctrl       (12341               ):       8825 I/Os completed (+1863)
00:12:29.034  
00:12:29.601     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:12:29.601     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:29.601     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:29.601  [2024-11-20 14:23:08.579322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:12:29.601  Controller removed: QEMU NVMe Ctrl       (12340               )
00:12:29.601  [2024-11-20 14:23:08.581669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.601  [2024-11-20 14:23:08.581753] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.601  [2024-11-20 14:23:08.581789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.601  [2024-11-20 14:23:08.581821] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.859  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:12:29.859  [2024-11-20 14:23:08.585549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860  [2024-11-20 14:23:08.585653] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860  [2024-11-20 14:23:08.585690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860  [2024-11-20 14:23:08.585718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:29.860  [2024-11-20 14:23:08.607468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:12:29.860  Controller removed: QEMU NVMe Ctrl       (12341               )
00:12:29.860  [2024-11-20 14:23:08.609681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860  [2024-11-20 14:23:08.609756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860  [2024-11-20 14:23:08.609803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860  [2024-11-20 14:23:08.609841] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:12:29.860  [2024-11-20 14:23:08.613161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860  [2024-11-20 14:23:08.613230] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860  [2024-11-20 14:23:08.613264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860  [2024-11-20 14:23:08.613292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:12:29.860  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:12:29.860  EAL: Scan for (pci) bus failed.
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:12:29.860  
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:29.860     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:12:30.120  Attaching to 0000:00:10.0
00:12:30.120  Attached to 0000:00:10.0
00:12:30.120     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:12:30.120     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:30.120     14:23:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:12:30.121  Attaching to 0000:00:11.0
00:12:30.121  Attached to 0000:00:11.0
00:12:31.058  QEMU NVMe Ctrl       (12340               ):       2903 I/Os completed (+2903)
00:12:31.058  QEMU NVMe Ctrl       (12341               ):       3483 I/Os completed (+3483)
00:12:31.058  
00:12:31.993  QEMU NVMe Ctrl       (12340               ):       4472 I/Os completed (+1569)
00:12:31.993  QEMU NVMe Ctrl       (12341               ):       5227 I/Os completed (+1744)
00:12:31.993  
00:12:32.928  QEMU NVMe Ctrl       (12340               ):       6165 I/Os completed (+1693)
00:12:32.928  QEMU NVMe Ctrl       (12341               ):       7098 I/Os completed (+1871)
00:12:32.928  
00:12:33.861  QEMU NVMe Ctrl       (12340               ):       7778 I/Os completed (+1613)
00:12:33.861  QEMU NVMe Ctrl       (12341               ):       8839 I/Os completed (+1741)
00:12:33.861  
00:12:35.236  QEMU NVMe Ctrl       (12340               ):       9478 I/Os completed (+1700)
00:12:35.236  QEMU NVMe Ctrl       (12341               ):      10672 I/Os completed (+1833)
00:12:35.236  
00:12:36.172  QEMU NVMe Ctrl       (12340               ):      11097 I/Os completed (+1619)
00:12:36.172  QEMU NVMe Ctrl       (12341               ):      12512 I/Os completed (+1840)
00:12:36.172  
00:12:37.108  QEMU NVMe Ctrl       (12340               ):      12770 I/Os completed (+1673)
00:12:37.108  QEMU NVMe Ctrl       (12341               ):      14369 I/Os completed (+1857)
00:12:37.108  
00:12:38.043  QEMU NVMe Ctrl       (12340               ):      14402 I/Os completed (+1632)
00:12:38.043  QEMU NVMe Ctrl       (12341               ):      16214 I/Os completed (+1845)
00:12:38.043  
00:12:39.002  QEMU NVMe Ctrl       (12340               ):      15910 I/Os completed (+1508)
00:12:39.002  QEMU NVMe Ctrl       (12341               ):      17998 I/Os completed (+1784)
00:12:39.002  
00:12:39.937  QEMU NVMe Ctrl       (12340               ):      17649 I/Os completed (+1739)
00:12:39.937  QEMU NVMe Ctrl       (12341               ):      20443 I/Os completed (+2445)
00:12:39.937  
00:12:40.871  QEMU NVMe Ctrl       (12340               ):      19576 I/Os completed (+1927)
00:12:40.871  QEMU NVMe Ctrl       (12341               ):      23009 I/Os completed (+2566)
00:12:40.871  
00:12:42.247  QEMU NVMe Ctrl       (12340               ):      21186 I/Os completed (+1610)
00:12:42.247  QEMU NVMe Ctrl       (12341               ):      24688 I/Os completed (+1679)
00:12:42.247  
00:12:42.247     14:23:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:12:42.247     14:23:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:12:42.247     14:23:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:42.247     14:23:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:42.247  [2024-11-20 14:23:20.920458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:12:42.247  Controller removed: QEMU NVMe Ctrl       (12340               )
00:12:42.248  [2024-11-20 14:23:20.922322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.922391] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.922420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.922447] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:12:42.248  [2024-11-20 14:23:20.925267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.925330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.925355] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.925380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248     14:23:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:42.248     14:23:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:42.248  [2024-11-20 14:23:20.945899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:12:42.248  Controller removed: QEMU NVMe Ctrl       (12341               )
00:12:42.248  [2024-11-20 14:23:20.947606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.947664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.947697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.947721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:12:42.248  [2024-11-20 14:23:20.950170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.950221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.950248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248  [2024-11-20 14:23:20.950270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:42.248     14:23:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:12:42.248     14:23:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:12:42.248  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor
00:12:42.248  EAL: Scan for (pci) bus failed.
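The two EAL lines above look like a race rather than a test failure: after the hot-remove, the device's sysfs node is gone until the next rescan, so a PCI bus scan that lands in that window cannot read the vendor attribute. An illustrative check of the same condition, assuming the standard sysfs layout:

    # between "echo 1 > .../remove" and "echo 1 > /sys/bus/pci/rescan"
    # the attribute files simply do not exist
    [[ -e /sys/bus/pci/devices/0000:00:11.0/vendor ]] \
        || echo 'vendor node absent; any scan now fails the way EAL did'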
00:12:42.248     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:42.248     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:42.248     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:12:42.248     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:12:42.248     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:42.248     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:42.248     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:42.248     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:12:42.248  Attaching to 0000:00:10.0
00:12:42.248  Attached to 0000:00:10.0
00:12:42.248     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:12:42.506     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:42.506     14:23:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:12:42.506  Attaching to 0000:00:11.0
00:12:42.506  Attached to 0000:00:11.0
00:12:43.074  QEMU NVMe Ctrl       (12340               ):        970 I/Os completed (+970)
00:12:43.074  QEMU NVMe Ctrl       (12341               ):        938 I/Os completed (+938)
00:12:43.074  
00:12:44.011  QEMU NVMe Ctrl       (12340               ):       2761 I/Os completed (+1791)
00:12:44.011  QEMU NVMe Ctrl       (12341               ):       2891 I/Os completed (+1953)
00:12:44.011  
00:12:44.945  QEMU NVMe Ctrl       (12340               ):       4539 I/Os completed (+1778)
00:12:44.945  QEMU NVMe Ctrl       (12341               ):       4886 I/Os completed (+1995)
00:12:44.945  
00:12:45.879  QEMU NVMe Ctrl       (12340               ):       6297 I/Os completed (+1758)
00:12:45.879  QEMU NVMe Ctrl       (12341               ):       6786 I/Os completed (+1900)
00:12:45.879  
00:12:46.824  QEMU NVMe Ctrl       (12340               ):       8065 I/Os completed (+1768)
00:12:46.824  QEMU NVMe Ctrl       (12341               ):       8723 I/Os completed (+1937)
00:12:46.824  
00:12:48.211  QEMU NVMe Ctrl       (12340               ):       9627 I/Os completed (+1562)
00:12:48.211  QEMU NVMe Ctrl       (12341               ):      10579 I/Os completed (+1856)
00:12:48.211  
00:12:49.147  QEMU NVMe Ctrl       (12340               ):      11210 I/Os completed (+1583)
00:12:49.147  QEMU NVMe Ctrl       (12341               ):      12360 I/Os completed (+1781)
00:12:49.147  
00:12:50.083  QEMU NVMe Ctrl       (12340               ):      12712 I/Os completed (+1502)
00:12:50.083  QEMU NVMe Ctrl       (12341               ):      14152 I/Os completed (+1792)
00:12:50.083  
00:12:51.019  QEMU NVMe Ctrl       (12340               ):      14346 I/Os completed (+1634)
00:12:51.019  QEMU NVMe Ctrl       (12341               ):      15924 I/Os completed (+1772)
00:12:51.019  
00:12:51.956  QEMU NVMe Ctrl       (12340               ):      15971 I/Os completed (+1625)
00:12:51.956  QEMU NVMe Ctrl       (12341               ):      17644 I/Os completed (+1720)
00:12:51.956  
00:12:52.891  QEMU NVMe Ctrl       (12340               ):      17599 I/Os completed (+1628)
00:12:52.891  QEMU NVMe Ctrl       (12341               ):      19384 I/Os completed (+1740)
00:12:52.891  
00:12:53.828  QEMU NVMe Ctrl       (12340               ):      19271 I/Os completed (+1672)
00:12:53.828  QEMU NVMe Ctrl       (12341               ):      21458 I/Os completed (+2074)
00:12:53.828  
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:54.396  [2024-11-20 14:23:33.238717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:12:54.396  Controller removed: QEMU NVMe Ctrl       (12340               )
00:12:54.396  [2024-11-20 14:23:33.241000] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.241145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.241187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.241218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:12:54.396  [2024-11-20 14:23:33.244833] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.244947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.244984] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.245012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:12:54.396  [2024-11-20 14:23:33.264955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:12:54.396  Controller removed: QEMU NVMe Ctrl       (12341               )
00:12:54.396  [2024-11-20 14:23:33.267025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.267098] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.267135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.267163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:12:54.396  [2024-11-20 14:23:33.270146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.270212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.270246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396  [2024-11-20 14:23:33.270270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:54.396     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:12:54.655     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:12:54.655     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:54.655     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:12:54.655     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:12:54.655     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:12:54.655  Attaching to 0000:00:10.0
00:12:54.655  Attached to 0000:00:10.0
00:12:54.655     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:12:54.655     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:12:54.655     14:23:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:12:54.655  Attaching to 0000:00:11.0
00:12:54.655  Attached to 0000:00:11.0
00:12:54.655  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:12:54.655  unregister_dev: QEMU NVMe Ctrl       (12341               )
00:12:54.655  [2024-11-20 14:23:33.535550] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:13:06.858     14:23:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:13:06.858     14:23:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:06.858    14:23:45 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.94
00:13:06.858    14:23:45 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.94
00:13:06.858    14:23:45 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:13:06.858   14:23:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.94
00:13:06.858   14:23:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.94 2
00:13:06.858  remove_attach_helper took 42.94s to complete (handling 2 nvme drive(s))
00:13:06.858   14:23:45 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
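The 42.94 figure is produced by a bash time wrapper (the autotest_common.sh@7xx lines traced above). A simplified sketch of the idea, not the exact SPDK helper, which juggles file descriptors more carefully:

    timing_cmd() {
        # print only the elapsed real time of "$@", two decimals (TIMEFORMAT=%2R)
        local time TIMEFORMAT=%2R
        time=$( { time "$@" > /dev/null; } 2>&1 )  # note: the command's own
        echo "$time"                               # stderr would mix in here
    }

    # arguments illustrative; this run used use_bdev=false per the @43/@68 traces
    helper_time=$(timing_cmd remove_attach_helper 3 6 false)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2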
00:13:13.425   14:23:51 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68560
00:13:13.425  /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68560) - No such process
00:13:13.425   14:23:51 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68560
00:13:13.425   14:23:51 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
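kill -0 sends no signal at all; it only asks whether the PID still exists and is signalable, so the "No such process" line above simply confirms the helper's target process had already exited before cleanup ran. The usual pattern:

    # liveness probe: status 0 iff the process exists and we may signal it
    if kill -0 "$pid" 2> /dev/null; then
        echo "$pid is still running"
    else
        wait "$pid" 2> /dev/null || :   # reap the exit status if it was our child
    fi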
00:13:13.425   14:23:51 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:13:13.425   14:23:51 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:13:13.425   14:23:51 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69098
00:13:13.425   14:23:51 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:13:13.425   14:23:51 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:13:13.425   14:23:51 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69098
00:13:13.425   14:23:51 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69098 ']'
00:13:13.425   14:23:51 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:13.425   14:23:51 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:13.425  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:13.425   14:23:51 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:13.425   14:23:51 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:13.425   14:23:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:13.425  [2024-11-20 14:23:51.648695] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:13:13.425  [2024-11-20 14:23:51.648871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69098 ]
00:13:13.425  [2024-11-20 14:23:51.836004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:13.425  [2024-11-20 14:23:51.962367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
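tgt_run_hotplug (sw_hotplug.sh@151) switches from the standalone test app to the spdk_tgt daemon so the rest of the run can drive hotplug over JSON-RPC. The launch pattern, per the @109-@113 trace (a sketch with helper internals simplified; $rootdir stands for the SPDK repo root):

    # start the target in the background, guard it with a cleanup trap,
    # and wait until its RPC socket (/var/tmp/spdk.sock) answers
    "$rootdir/build/bin/spdk_tgt" &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; echo 1 > /sys/bus/pci/rescan; exit 1' \
        SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"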
00:13:13.993   14:23:52 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:13.993   14:23:52 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:13:13.993   14:23:52 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:13:13.993   14:23:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.993   14:23:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:13.993   14:23:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
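bdev_nvme_set_hotplug -e turns on the bdev layer's own NVMe hotplug monitor, which is what lets the target re-enumerate the controllers after each PCI rescan without restarting anything. rpc_cmd in the trace wraps the regular RPC client, so the equivalent direct invocation is:

    # enable the hotplug monitor on the running target...
    scripts/rpc.py bdev_nvme_set_hotplug -e
    # ...and disable it again, as sw_hotplug.sh@119 does at the end of this phase
    scripts/rpc.py bdev_nvme_set_hotplug -d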
00:13:13.993   14:23:52 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:13:13.993   14:23:52 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:13:13.993    14:23:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:13:13.993    14:23:52 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:13:13.993    14:23:52 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:13:13.993    14:23:52 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:13:13.993    14:23:52 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:13:13.993     14:23:52 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:13:13.993     14:23:52 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:13:13.993     14:23:52 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:13:13.993     14:23:52 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:13:13.993     14:23:52 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:13:13.993     14:23:52 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:13:20.554     14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:20.554     14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:20.554     14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:20.554     14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:20.554     14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:20.554     14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:13:20.554     14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:20.554      14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:20.554      14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:20.554      14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:20.554       14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:20.554       14:23:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.554       14:23:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
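bdev_bdfs (sw_hotplug.sh@12-@13, traced just above) is how the bdev-mode loop observes attach state: list every bdev over RPC, keep each one's NVMe PCI address, and dedupe. Reconstructed from the xtrace:

    bdev_bdfs() {
        # one controller can back several namespaces/bdevs; sort -u collapses
        # them so the caller can compare against the expected BDF list
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }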
00:13:20.554  [2024-11-20 14:23:58.844257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:13:20.554  [2024-11-20 14:23:58.846989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:20.554  [2024-11-20 14:23:58.847062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:20.554  [2024-11-20 14:23:58.847089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:20.554  [2024-11-20 14:23:58.847120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:20.554  [2024-11-20 14:23:58.847136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:20.554  [2024-11-20 14:23:58.847152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:20.554  [2024-11-20 14:23:58.847168] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:20.554  [2024-11-20 14:23:58.847183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:20.554  [2024-11-20 14:23:58.847196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:20.554  [2024-11-20 14:23:58.847217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:20.554  [2024-11-20 14:23:58.847231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:20.554  [2024-11-20 14:23:58.847246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:20.554       14:23:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.554     14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:13:20.554     14:23:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
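The @50/@51 pair above is the removal-side wait: re-query bdev_bdfs every half second and name whichever BDFs are still visible. The loop they trace out (a sketch; exact statement order inside the script may differ):

    # after the sysfs remove, poll until no NVMe bdevs remain
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done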
00:13:20.554  [2024-11-20 14:23:59.244279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:13:20.554  [2024-11-20 14:23:59.247233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:20.554  [2024-11-20 14:23:59.247287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:20.554  [2024-11-20 14:23:59.247313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:20.554  [2024-11-20 14:23:59.247340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:20.554  [2024-11-20 14:23:59.247358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:20.554  [2024-11-20 14:23:59.247373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:20.554  [2024-11-20 14:23:59.247390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:20.554  [2024-11-20 14:23:59.247403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:20.554  [2024-11-20 14:23:59.247418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:20.554  [2024-11-20 14:23:59.247432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:20.554  [2024-11-20 14:23:59.247448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:20.554  [2024-11-20 14:23:59.247461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:20.554     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:13:20.554     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:20.554      14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:20.554      14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:20.554      14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:20.554       14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:20.554       14:23:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.554       14:23:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:20.554       14:23:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.554     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:13:20.554     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:13:20.554     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:20.554     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:20.554     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:13:20.811     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:13:20.811     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:20.811     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:20.811     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:20.811     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:13:20.811     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:13:20.811     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:20.811     14:23:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:13:33.059      14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:13:33.059      14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:33.059      14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:33.059       14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:33.059       14:24:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.059       14:24:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:33.059       14:24:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
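The @71 test above is the reattach check: after the 12 s settle, the set of BDFs the target reports must again equal the expected pair before the next hotplug event may start. A sketch of the comparison, with nvmes as the test's device list:

    nvmes=(0000:00:10.0 0000:00:11.0)
    bdfs=($(bdev_bdfs))
    # joined-string compare: both controllers back, in sorted order
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]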
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:33.059  [2024-11-20 14:24:11.844516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:13:33.059  [2024-11-20 14:24:11.847772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:33.059  [2024-11-20 14:24:11.847833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:33.059  [2024-11-20 14:24:11.847856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:33.059  [2024-11-20 14:24:11.847887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:33.059  [2024-11-20 14:24:11.847904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:33.059  [2024-11-20 14:24:11.847920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:33.059  [2024-11-20 14:24:11.847935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:33.059  [2024-11-20 14:24:11.847951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:33.059  [2024-11-20 14:24:11.847965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:33.059  [2024-11-20 14:24:11.847981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:33.059  [2024-11-20 14:24:11.847995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:33.059  [2024-11-20 14:24:11.848010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:33.059      14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:33.059      14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:33.059      14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:33.059       14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:33.059       14:24:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.059       14:24:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:33.059       14:24:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:13:33.059     14:24:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:13:33.317  [2024-11-20 14:24:12.244507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:13:33.317  [2024-11-20 14:24:12.248871] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:33.317  [2024-11-20 14:24:12.248951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:33.317  [2024-11-20 14:24:12.248992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:33.317  [2024-11-20 14:24:12.249041] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:33.317  [2024-11-20 14:24:12.249072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:33.317  [2024-11-20 14:24:12.249095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:33.317  [2024-11-20 14:24:12.249122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:33.317  [2024-11-20 14:24:12.249144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:33.317  [2024-11-20 14:24:12.249168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:33.317  [2024-11-20 14:24:12.249191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:33.317  [2024-11-20 14:24:12.249215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:33.317  [2024-11-20 14:24:12.249237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:33.575     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:13:33.575     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:33.575      14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:33.575      14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:33.575       14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:33.575       14:24:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:33.575       14:24:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:33.575      14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:33.575       14:24:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:33.575     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:13:33.575     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:13:33.831     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:33.831     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:33.831     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:13:33.831     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:13:33.831     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:33.831     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:33.831     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:33.831     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:13:34.089     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:13:34.089     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:34.089     14:24:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:13:46.296     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:13:46.296     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:13:46.296      14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:13:46.297      14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:46.297       14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:46.297       14:24:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.297       14:24:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:46.297      14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:46.297       14:24:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.297     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:13:46.297     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:46.297     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:46.297     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:46.297     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:13:46.297     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:13:46.297     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:13:46.297     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:46.297      14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:46.297  [2024-11-20 14:24:24.944933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:13:46.297      14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:46.297      14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:46.297       14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:46.297       14:24:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.297  [2024-11-20 14:24:24.947940] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:46.297  [2024-11-20 14:24:24.948005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:46.297  [2024-11-20 14:24:24.948027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:46.297  [2024-11-20 14:24:24.948057] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:46.297  [2024-11-20 14:24:24.948073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:46.297  [2024-11-20 14:24:24.948092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:46.297  [2024-11-20 14:24:24.948107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:46.297  [2024-11-20 14:24:24.948122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:46.297  [2024-11-20 14:24:24.948136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:46.297  [2024-11-20 14:24:24.948152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:46.297  [2024-11-20 14:24:24.948167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:46.297  [2024-11-20 14:24:24.948183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:46.297       14:24:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:46.297       14:24:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.297     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:13:46.297     14:24:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:13:46.556  [2024-11-20 14:24:25.344966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:13:46.556  [2024-11-20 14:24:25.347937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:46.556  [2024-11-20 14:24:25.347991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:46.556  [2024-11-20 14:24:25.348017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:46.556  [2024-11-20 14:24:25.348046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:46.556  [2024-11-20 14:24:25.348064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:46.556  [2024-11-20 14:24:25.348078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:46.556  [2024-11-20 14:24:25.348096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:46.556  [2024-11-20 14:24:25.348109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:46.556  [2024-11-20 14:24:25.348127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:46.556  [2024-11-20 14:24:25.348142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:13:46.556  [2024-11-20 14:24:25.348157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:13:46.556  [2024-11-20 14:24:25.348171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:13:46.556     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:13:46.556     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:13:46.556      14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:13:46.556      14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:46.556      14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:46.556       14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:46.556       14:24:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.556       14:24:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:46.556       14:24:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.815     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:13:46.815     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:13:46.815     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:46.815     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:46.815     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:13:46.815     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:13:46.815     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:46.815     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:13:46.815     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:13:46.815     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:13:47.072     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:13:47.073     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:13:47.073     14:24:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:13:59.448     14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:13:59.448     14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:13:59.448      14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:13:59.448      14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:13:59.448      14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:13:59.448       14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:13:59.448       14:24:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:59.448       14:24:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:59.448       14:24:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:59.448     14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:13:59.448     14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:13:59.448    14:24:37 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18
00:13:59.448    14:24:37 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18
00:13:59.448    14:24:37 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:13:59.448   14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18
00:13:59.448   14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2
00:13:59.448  remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s))
00:13:59.448   14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:13:59.448   14:24:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:59.448   14:24:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:59.448   14:24:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:59.448   14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:13:59.448   14:24:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:59.448   14:24:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:13:59.448   14:24:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:59.448   14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:13:59.448   14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:13:59.448    14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:13:59.449    14:24:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:13:59.449    14:24:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:13:59.449    14:24:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:13:59.449    14:24:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:13:59.449     14:24:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:13:59.449     14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:13:59.449     14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:13:59.449     14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:13:59.449     14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:13:59.449     14:24:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:14:06.025     14:24:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:06.025     14:24:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:06.025     14:24:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:06.025     14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:06.025     14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:06.025     14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:06.025     14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:06.025      14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:06.025       14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:06.025      14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:06.025      14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:06.025       14:24:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.025       14:24:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:06.025  [2024-11-20 14:24:44.056304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:14:06.025  [2024-11-20 14:24:44.060112] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.025  [2024-11-20 14:24:44.060211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.025  [2024-11-20 14:24:44.060249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.025  [2024-11-20 14:24:44.060337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.025  [2024-11-20 14:24:44.060380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.025  [2024-11-20 14:24:44.060421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.025  [2024-11-20 14:24:44.060451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.025  [2024-11-20 14:24:44.060504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.025  [2024-11-20 14:24:44.060533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.025  [2024-11-20 14:24:44.060593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.025  [2024-11-20 14:24:44.060623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.025  [2024-11-20 14:24:44.060675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.025       14:24:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.025     14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:14:06.025     14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:06.025     14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:14:06.025     14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:06.025      14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:06.025      14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:06.025       14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:06.025      14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:06.025       14:24:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.025       14:24:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:06.025       14:24:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.025     14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:14:06.025     14:24:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:06.025  [2024-11-20 14:24:44.756283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:14:06.025  [2024-11-20 14:24:44.758212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.025  [2024-11-20 14:24:44.758271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.025  [2024-11-20 14:24:44.758302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.025  [2024-11-20 14:24:44.758333] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.025  [2024-11-20 14:24:44.758355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.025  [2024-11-20 14:24:44.758371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.025  [2024-11-20 14:24:44.758393] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.025  [2024-11-20 14:24:44.758407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.025  [2024-11-20 14:24:44.758426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.025  [2024-11-20 14:24:44.758441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:06.025  [2024-11-20 14:24:44.758460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:06.025  [2024-11-20 14:24:44.758475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:06.025  [2024-11-20 14:24:44.758500] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed
00:14:06.025  [2024-11-20 14:24:44.758517] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed
00:14:06.025  [2024-11-20 14:24:44.758545] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed
00:14:06.025  [2024-11-20 14:24:44.758560] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed
00:14:06.284     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:14:06.284     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:06.284      14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:06.284      14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:06.284       14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:06.284       14:24:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.284       14:24:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:06.284      14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:06.284       14:24:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.284     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:06.284     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:06.544     14:24:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:18.745      14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:18.745      14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:18.745      14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:18.745       14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:18.745       14:24:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.745       14:24:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:18.745       14:24:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:18.745      14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:18.745       14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:18.745      14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:18.745      14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:18.745       14:24:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:18.745       14:24:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:18.745       14:24:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:18.745  [2024-11-20 14:24:57.656479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 ))
00:14:18.745  [2024-11-20 14:24:57.658429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:18.745     14:24:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:18.745  [2024-11-20 14:24:57.658485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:18.745  [2024-11-20 14:24:57.658507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:18.745  [2024-11-20 14:24:57.658539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:18.745  [2024-11-20 14:24:57.658554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:18.745  [2024-11-20 14:24:57.658586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:18.745  [2024-11-20 14:24:57.658606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:18.745  [2024-11-20 14:24:57.658623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:18.745  [2024-11-20 14:24:57.658637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:18.745  [2024-11-20 14:24:57.658654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:18.745  [2024-11-20 14:24:57.658667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:18.745  [2024-11-20 14:24:57.658683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.314  [2024-11-20 14:24:58.056485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:14:19.314  [2024-11-20 14:24:58.058486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.314  [2024-11-20 14:24:58.058549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:19.314  [2024-11-20 14:24:58.058589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.314  [2024-11-20 14:24:58.058619] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.314  [2024-11-20 14:24:58.058638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:19.314  [2024-11-20 14:24:58.058653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.314  [2024-11-20 14:24:58.058671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.314  [2024-11-20 14:24:58.058685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:19.314  [2024-11-20 14:24:58.058701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.314  [2024-11-20 14:24:58.058716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:19.314  [2024-11-20 14:24:58.058731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:19.314  [2024-11-20 14:24:58.058744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.314     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0
00:14:19.314     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:19.314      14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:19.314      14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:19.314       14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:19.314      14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:19.314       14:24:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:19.314       14:24:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:19.314       14:24:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:19.314     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:19.314     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:19.585     14:24:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
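Lines @39 through @66 above are one complete software-hotplug iteration: write 1 to each function's sysfs remove node, poll `bdev_bdfs` every 0.5 s until the target stops reporting the BDFs, rescan the PCI bus, and re-bind each function to uio_pci_generic before sleeping 12 s for I/O to resettle. The sysfs paths below are the standard kernel interfaces; the whole block is a hedged reconstruction of the cycle, not a copy of sw_hotplug.sh:

```bash
#!/usr/bin/env bash
# One software-hotplug cycle for a set of NVMe functions, driven through
# sysfs. Uses bdev_bdfs as sketched earlier (an assumption of this sketch).
nvmes=(0000:00:10.0 0000:00:11.0)

# 1. Hot-remove every function.
for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done

# 2. Wait until the SPDK target no longer reports any of them.
while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done

# 3. Bring the functions back and hand them to uio_pci_generic.
echo 1 > /sys/bus/pci/rescan
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"   # clear the override
done

sleep 12   # let I/O settle before the next iteration
```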
00:14:31.892     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:31.892     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:31.892      14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:31.892      14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:31.892      14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:31.892       14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:31.892       14:25:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:31.892       14:25:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:31.893       14:25:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:31.893     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:31.893     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:31.893     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:31.893     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:31.893     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:14:31.893     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:14:31.893     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:14:31.893     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:31.893  [2024-11-20 14:25:10.656701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:14:31.893      14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:31.893  [2024-11-20 14:25:10.658646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:31.893  [2024-11-20 14:25:10.658706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:31.893  [2024-11-20 14:25:10.658728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:31.893  [2024-11-20 14:25:10.658762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:31.893  [2024-11-20 14:25:10.658778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:31.893  [2024-11-20 14:25:10.658794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:31.893  [2024-11-20 14:25:10.658810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:31.893  [2024-11-20 14:25:10.658826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:31.893  [2024-11-20 14:25:10.658839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:31.893  [2024-11-20 14:25:10.658856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:31.893  [2024-11-20 14:25:10.658870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:31.893  [2024-11-20 14:25:10.658886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:31.893      14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:31.893       14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:31.893      14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:31.893       14:25:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:31.893       14:25:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:31.893       14:25:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:31.893     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:14:31.893     14:25:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:14:32.151  [2024-11-20 14:25:11.056710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state.
00:14:32.151  [2024-11-20 14:25:11.058831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:32.151  [2024-11-20 14:25:11.058884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:32.151  [2024-11-20 14:25:11.058910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:32.151  [2024-11-20 14:25:11.058938] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:32.151  [2024-11-20 14:25:11.058956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:32.151  [2024-11-20 14:25:11.058970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:32.151  [2024-11-20 14:25:11.058991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:32.151  [2024-11-20 14:25:11.059005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:32.151  [2024-11-20 14:25:11.059020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:32.151  [2024-11-20 14:25:11.059035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:14:32.151  [2024-11-20 14:25:11.059050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:14:32.151  [2024-11-20 14:25:11.059064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:32.410     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0
00:14:32.410     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:14:32.410      14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:14:32.410      14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:32.410      14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:32.410       14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:32.410       14:25:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:32.410       14:25:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:32.410       14:25:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:32.410     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:14:32.410     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:14:32.410     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:32.410     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:32.410     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:14:32.668     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:14:32.668     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:32.668     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:14:32.668     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:14:32.668     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:14:32.668     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:14:32.668     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:14:32.668     14:25:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:14:44.869     14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:14:44.869     14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:14:44.869      14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:14:44.869      14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:14:44.869      14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:14:44.869       14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:14:44.869       14:25:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:44.869       14:25:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:44.869       14:25:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:44.869     14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:14:44.869     14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:14:44.869    14:25:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.64
00:14:44.869    14:25:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.64
00:14:44.869    14:25:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:14:44.869   14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.64
00:14:44.869   14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.64 2
00:14:44.869  remove_attach_helper took 45.64s to complete (handling 2 nvme drive(s))
00:14:44.869   14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT
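The time=45.64 / helper_time=45.64 pair comes from a timing wrapper around `remove_attach_helper`: autotest_common.sh measures the elapsed seconds, echoes them, and sw_hotplug.sh captures the value for the summary line. A minimal sketch of that shape, using bash's SECONDS counter (the real wrapper also manages xtrace and reports sub-second precision, which SECONDS cannot):

```bash
# Time a command and surface the elapsed whole seconds in $helper_time,
# preserving the command's exit status. Sketch only.
timed_run() {
    local start=$SECONDS rc=0
    "$@" || rc=$?
    helper_time=$((SECONDS - start))
    return "$rc"
}

timed_run sleep 2   # stand-in for the traced remove_attach_helper
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 2
```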
00:14:44.869   14:25:23 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69098
00:14:44.869   14:25:23 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69098 ']'
00:14:44.869   14:25:23 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69098
00:14:44.869    14:25:23 sw_hotplug -- common/autotest_common.sh@959 -- # uname
00:14:44.870   14:25:23 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:44.870    14:25:23 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69098
00:14:44.870   14:25:23 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:44.870   14:25:23 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:44.870  killing process with pid 69098
00:14:44.870   14:25:23 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69098'
00:14:44.870   14:25:23 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69098
00:14:44.870   14:25:23 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69098
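The killprocess sequence above is deliberately defensive: it requires a non-empty PID, confirms the process still exists with `kill -0`, checks the command name so a `sudo` wrapper is never the thing signalled, and only then kills and reaps. Condensed into one function (a sketch of the pattern the trace shows, not the verbatim autotest_common.sh body):

```bash
# Kill a test target by PID with the same sanity checks the log shows.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1        # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [[ $name != sudo ]] || return 1               # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap if it is our child
}
```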
00:14:47.399   14:25:25 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:47.399  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:47.658  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:47.658  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:47.658  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:14:47.916  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
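setup.sh leaves 0000:00:03.0 alone because its virtio-blk disk (vda) holds mounted filesystems, while the idle NVMe functions are handed to uio_pci_generic. One plausible shape of that mount check against /proc/mounts (the real setup.sh logic is more involved and also covers swap, LVM, and holders):

```bash
# Return 0 if block device $1 (e.g. vda) or any of its partitions is mounted.
holds_mounts() {
    local blk=$1 part
    for part in "/sys/block/$blk/$blk"*; do
        [[ -e $part ]] || continue
        grep -qs "^/dev/${part##*/} " /proc/mounts && return 0
    done
    grep -qs "^/dev/$blk " /proc/mounts
}

holds_mounts vda && echo "vda busy: not rebinding its PCI device"
```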
00:14:47.916  
00:14:47.916  real	2m31.940s
00:14:47.916  user	1m51.769s
00:14:47.916  sys	0m19.962s
00:14:47.916   14:25:26 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:47.916   14:25:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:47.916  ************************************
00:14:47.916  END TEST sw_hotplug
00:14:47.916  ************************************
00:14:47.916   14:25:26  -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]]
00:14:47.916   14:25:26  -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh
00:14:47.916   14:25:26  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:47.916   14:25:26  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:47.916   14:25:26  -- common/autotest_common.sh@10 -- # set +x
00:14:47.916  ************************************
00:14:47.916  START TEST nvme_xnvme
00:14:47.916  ************************************
00:14:47.916   14:25:26 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh
00:14:47.916  * Looking for test storage...
00:14:47.916  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:14:47.916     14:25:26 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:47.916      14:25:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version
00:14:47.916      14:25:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:48.177     14:25:26 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@345 -- # : 1
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:48.177      14:25:26 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1
00:14:48.177      14:25:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=1
00:14:48.177      14:25:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:48.177      14:25:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 1
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:14:48.177      14:25:26 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2
00:14:48.177      14:25:26 nvme_xnvme -- scripts/common.sh@353 -- # local d=2
00:14:48.177      14:25:26 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:48.177      14:25:26 nvme_xnvme -- scripts/common.sh@355 -- # echo 2
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:48.177     14:25:26 nvme_xnvme -- scripts/common.sh@368 -- # return 0
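The scripts/common.sh trace above is `lt 1.15 2` deciding whether the installed lcov predates version 2: both strings are split on '.', '-' and ':' via the IFS=.-: trick, then compared field by field as integers, with missing fields treated as 0. The same logic as one self-contained function (a sketch that assumes purely numeric fields, as the trace's inputs are):

```bash
# Return 0 if version $1 is strictly older than version $2.
version_lt() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < max; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```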
00:14:48.177     14:25:26 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:48.177     14:25:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:48.177  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:48.177  		--rc genhtml_branch_coverage=1
00:14:48.177  		--rc genhtml_function_coverage=1
00:14:48.177  		--rc genhtml_legend=1
00:14:48.177  		--rc geninfo_all_blocks=1
00:14:48.177  		--rc geninfo_unexecuted_blocks=1
00:14:48.177  		
00:14:48.177  		'
00:14:48.177     14:25:26 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:48.177  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:48.177  		--rc genhtml_branch_coverage=1
00:14:48.177  		--rc genhtml_function_coverage=1
00:14:48.177  		--rc genhtml_legend=1
00:14:48.177  		--rc geninfo_all_blocks=1
00:14:48.177  		--rc geninfo_unexecuted_blocks=1
00:14:48.177  		
00:14:48.177  		'
00:14:48.177     14:25:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:48.177  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:48.177  		--rc genhtml_branch_coverage=1
00:14:48.177  		--rc genhtml_function_coverage=1
00:14:48.177  		--rc genhtml_legend=1
00:14:48.177  		--rc geninfo_all_blocks=1
00:14:48.177  		--rc geninfo_unexecuted_blocks=1
00:14:48.177  		
00:14:48.177  		'
00:14:48.177     14:25:26 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:48.177  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:48.177  		--rc genhtml_branch_coverage=1
00:14:48.177  		--rc genhtml_function_coverage=1
00:14:48.177  		--rc genhtml_legend=1
00:14:48.177  		--rc geninfo_all_blocks=1
00:14:48.177  		--rc geninfo_unexecuted_blocks=1
00:14:48.177  		
00:14:48.177  		'
00:14:48.177    14:25:26 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh
00:14:48.177     14:25:26 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:14:48.177      14:25:26 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:14:48.177      14:25:26 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e
00:14:48.177      14:25:26 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:14:48.177      14:25:26 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob
00:14:48.177      14:25:26 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:14:48.177      14:25:26 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:14:48.177      14:25:26 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:14:48.177      14:25:26 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:14:48.177       14:25:26 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:14:48.178       14:25:26 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n
00:14:48.178      14:25:26 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:14:48.178         14:25:26 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:14:48.178        14:25:26 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:14:48.178  #define SPDK_CONFIG_H
00:14:48.178  #define SPDK_CONFIG_AIO_FSDEV 1
00:14:48.178  #define SPDK_CONFIG_APPS 1
00:14:48.178  #define SPDK_CONFIG_ARCH native
00:14:48.178  #define SPDK_CONFIG_ASAN 1
00:14:48.178  #undef SPDK_CONFIG_AVAHI
00:14:48.178  #undef SPDK_CONFIG_CET
00:14:48.178  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:14:48.178  #define SPDK_CONFIG_COVERAGE 1
00:14:48.178  #define SPDK_CONFIG_CROSS_PREFIX 
00:14:48.178  #undef SPDK_CONFIG_CRYPTO
00:14:48.178  #undef SPDK_CONFIG_CRYPTO_MLX5
00:14:48.178  #undef SPDK_CONFIG_CUSTOMOCF
00:14:48.178  #undef SPDK_CONFIG_DAOS
00:14:48.178  #define SPDK_CONFIG_DAOS_DIR 
00:14:48.178  #define SPDK_CONFIG_DEBUG 1
00:14:48.178  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:14:48.178  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:14:48.178  #define SPDK_CONFIG_DPDK_INC_DIR 
00:14:48.178  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:14:48.178  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:14:48.178  #undef SPDK_CONFIG_DPDK_UADK
00:14:48.178  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:14:48.178  #define SPDK_CONFIG_EXAMPLES 1
00:14:48.178  #undef SPDK_CONFIG_FC
00:14:48.178  #define SPDK_CONFIG_FC_PATH 
00:14:48.178  #define SPDK_CONFIG_FIO_PLUGIN 1
00:14:48.178  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:14:48.178  #define SPDK_CONFIG_FSDEV 1
00:14:48.178  #undef SPDK_CONFIG_FUSE
00:14:48.178  #undef SPDK_CONFIG_FUZZER
00:14:48.178  #define SPDK_CONFIG_FUZZER_LIB 
00:14:48.178  #undef SPDK_CONFIG_GOLANG
00:14:48.178  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:14:48.178  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:14:48.178  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:14:48.178  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:14:48.178  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:14:48.178  #undef SPDK_CONFIG_HAVE_LIBBSD
00:14:48.178  #undef SPDK_CONFIG_HAVE_LZ4
00:14:48.178  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:14:48.178  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:14:48.178  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:14:48.178  #define SPDK_CONFIG_IDXD 1
00:14:48.178  #define SPDK_CONFIG_IDXD_KERNEL 1
00:14:48.178  #undef SPDK_CONFIG_IPSEC_MB
00:14:48.178  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:14:48.178  #define SPDK_CONFIG_ISAL 1
00:14:48.178  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:14:48.178  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:14:48.178  #define SPDK_CONFIG_LIBDIR 
00:14:48.178  #undef SPDK_CONFIG_LTO
00:14:48.178  #define SPDK_CONFIG_MAX_LCORES 128
00:14:48.178  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:14:48.178  #define SPDK_CONFIG_NVME_CUSE 1
00:14:48.178  #undef SPDK_CONFIG_OCF
00:14:48.178  #define SPDK_CONFIG_OCF_PATH 
00:14:48.178  #define SPDK_CONFIG_OPENSSL_PATH 
00:14:48.178  #undef SPDK_CONFIG_PGO_CAPTURE
00:14:48.178  #define SPDK_CONFIG_PGO_DIR 
00:14:48.178  #undef SPDK_CONFIG_PGO_USE
00:14:48.178  #define SPDK_CONFIG_PREFIX /usr/local
00:14:48.178  #undef SPDK_CONFIG_RAID5F
00:14:48.178  #undef SPDK_CONFIG_RBD
00:14:48.178  #define SPDK_CONFIG_RDMA 1
00:14:48.178  #define SPDK_CONFIG_RDMA_PROV verbs
00:14:48.178  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:14:48.178  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:14:48.178  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:14:48.178  #define SPDK_CONFIG_SHARED 1
00:14:48.178  #undef SPDK_CONFIG_SMA
00:14:48.178  #define SPDK_CONFIG_TESTS 1
00:14:48.178  #undef SPDK_CONFIG_TSAN
00:14:48.178  #define SPDK_CONFIG_UBLK 1
00:14:48.178  #define SPDK_CONFIG_UBSAN 1
00:14:48.178  #undef SPDK_CONFIG_UNIT_TESTS
00:14:48.178  #undef SPDK_CONFIG_URING
00:14:48.178  #define SPDK_CONFIG_URING_PATH 
00:14:48.178  #undef SPDK_CONFIG_URING_ZNS
00:14:48.178  #undef SPDK_CONFIG_USDT
00:14:48.178  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:14:48.178  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:14:48.178  #undef SPDK_CONFIG_VFIO_USER
00:14:48.178  #define SPDK_CONFIG_VFIO_USER_DIR 
00:14:48.178  #define SPDK_CONFIG_VHOST 1
00:14:48.178  #define SPDK_CONFIG_VIRTIO 1
00:14:48.178  #undef SPDK_CONFIG_VTUNE
00:14:48.178  #define SPDK_CONFIG_VTUNE_DIR 
00:14:48.178  #define SPDK_CONFIG_WERROR 1
00:14:48.178  #define SPDK_CONFIG_WPDK_DIR 
00:14:48.178  #define SPDK_CONFIG_XNVME 1
00:14:48.178  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:14:48.178       14:25:26 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
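The long backslash-escaped pattern match above is applications.sh detecting a debug build: it reads the generated include/spdk/config.h into the test expression and string-matches it against the literal `#define SPDK_CONFIG_DEBUG`. The same check in plainer form:

```bash
# Detect a debug build the way the traced applications.sh does: look for
# the literal "#define SPDK_CONFIG_DEBUG" line in the generated config.h.
config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build: *_DEBUG app variants may be used"
fi
```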
00:14:48.178      14:25:26 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:48.178       14:25:26 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob
00:14:48.178       14:25:26 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:48.178       14:25:26 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:48.178       14:25:26 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:48.178        14:25:26 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:48.178        14:25:26 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:48.178        14:25:26 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:48.178        14:25:26 nvme_xnvme -- paths/export.sh@5 -- # export PATH
00:14:48.178        14:25:26 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
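The three paths/export.sh lines each prepend a toolchain directory, and because the file is re-sourced by every nested suite the same directories stack up repeatedly in the echoed PATH. That is harmless, but an idempotent prepend avoids the growth; a small sketch:

```bash
# Prepend a directory to PATH only if it is not already present,
# so repeated sourcing does not duplicate entries.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                 # already there, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/golangci/1.54.2/bin
export PATH
```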
00:14:48.179      14:25:26 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:14:48.179         14:25:26 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:14:48.179        14:25:26 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:14:48.179       14:25:26 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:14:48.179        14:25:26 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:14:48.179        14:25:27 nvme_xnvme -- pm/common@68 -- # uname -s
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@76 -- # SUDO[0]=
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E'
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]]
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:14:48.179       14:25:27 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
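pm/common's trace builds an associative "needs sudo?" map per resource monitor plus a two-entry SUDO indirection array, so that `${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]}` expands to either nothing or `sudo -E`. The trick in isolation:

```bash
# Gate each resource monitor behind sudo only when it needs it, using an
# associative map and a two-entry indirection array, as the trace shows.
declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
)
SUDO[0]=""         # plain invocation
SUDO[1]="sudo -E"  # privileged invocation

for mon in collect-cpu-load collect-vmstat; do
    echo "would run: ${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]} $mon"
done
```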
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@70 -- # :
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@126 -- # :
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@140 -- # :
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@142 -- # : true
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@154 -- # :
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0
00:14:48.179      14:25:27 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@169 -- # :
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
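Every `: 0` / `export SPDK_TEST_*` pair above is the default-then-export idiom: `: "${VAR:=default}"` assigns only when the variable is unset, and the `:` built-in discards the expansion, which is why the trace shows the already-resolved value (`: 1` for flags the CI job preset, `: 0` for untouched ones). In isolation:

```bash
# Give each test flag a default only if the caller did not set one,
# then export it to child processes.
SPDK_TEST_NVME=1               # simulate the CI job presetting a flag
: "${SPDK_TEST_NVME:=0}"       # keeps the caller's 1
: "${SPDK_TEST_XNVME:=0}"      # unset, so it defaults to 0 here
export SPDK_TEST_NVME SPDK_TEST_XNVME
echo "NVME=$SPDK_TEST_NVME XNVME=$SPDK_TEST_XNVME"   # -> NVME=1 XNVME=0
```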
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@206 -- # cat
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
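The `cat` / `echo leak:libfuse3.so` steps write a LeakSanitizer suppression file so a known benign libfuse3 leak does not fail ASAN runs, and LSAN_OPTIONS then points at it. The standalone shape of that setup (paths and option strings taken from the trace):

```bash
# Suppress a known, benign leak report (libfuse3) for ASAN/LSAN test runs.
supp=/var/tmp/asan_suppression_file
rm -f "$supp"
echo 'leak:libfuse3.so' > "$supp"
export LSAN_OPTIONS="suppressions=$supp"
export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
```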
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV=
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt=
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind=
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind=
00:14:48.180       14:25:27 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE=
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70451 ]]
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70451
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:14:48.180       14:25:27 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.77q3vV
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.77q3vV/tests/xnvme /tmp/spdk.77q3vV
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.180       14:25:27 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T
00:14:48.180       14:25:27 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975404544
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592350720
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344
00:14:48.180      14:25:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975404544
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592350720
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94558990336
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5143789568
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:14:48.181  * Looking for test storage...
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:14:48.181       14:25:27 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:14:48.181       14:25:27 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975404544
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]]
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]]
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]]
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:14:48.181  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0
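[Editor's note: the block above is set_test_storage. It snapshots `df -T` into associative arrays, then walks the candidate directories (testdir, a /tmp fallback, its parent) until one sits on a mount with enough free space; here /home has ~13 GB available, so the testdir itself wins. A condensed sketch of that flow, assuming the script converts df's 1K blocks to bytes, which matches the byte-sized avail values in the trace:

    set_test_storage() {
        local requested_size=$(($1 + 64 * 1024 * 1024))   # margin seen above: 2147483648 -> 2214592512
        local testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
        local storage_fallback target_dir mount target_space
        local source fs size use avail
        local -a storage_candidates
        local -A avails

        storage_fallback=$(mktemp -udt spdk.XXXXXX)
        storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
        mkdir -p "${storage_candidates[@]}"

        while read -r source fs size use avail _ mount; do
            avails["$mount"]=$((avail * 1024))            # df -T reports 1K blocks
        done < <(df -T | grep -v Filesystem)

        for target_dir in "${storage_candidates[@]}"; do
            mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
            target_space=${avails["$mount"]}
            if ((target_space >= requested_size)); then
                export SPDK_TEST_STORAGE=$target_dir
                printf '* Found test storage at %s\n' "$target_dir"
                return 0
            fi
        done
        return 1
    }
]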
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@1685 -- # true
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@27 -- # exec
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@29 -- # exec
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x
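[Editor's note: the [[ -n 13 ]] / exec / set -x sequence above is xtrace_restore. The framework keeps the full `set -x` stream on a private file descriptor (13 here, hence the /proc/self/fd/13 check) so tracing can be suspended around noisy helpers and resumed afterwards; that is why the long environment dump above never floods the visible console. A hypothetical reduction of the mechanism using bash's built-in BASH_XTRACEFD:

    exec 13>>/tmp/xtrace.log   # hypothetical sink; fd 13 matches the check above
    BASH_XTRACEFD=13           # route the `set -x` stream to fd 13 instead of stderr
    set -x                     # xtrace_restore: tracing back on
    : some traced command
    set +x                     # xtrace_disable: quiet again
]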
00:14:48.181      14:25:27 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:48.181       14:25:27 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version
00:14:48.181       14:25:27 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:48.440      14:25:27 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@345 -- # : 1
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:48.440       14:25:27 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1
00:14:48.440       14:25:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=1
00:14:48.440       14:25:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:48.440       14:25:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 1
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:14:48.440       14:25:27 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2
00:14:48.440       14:25:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=2
00:14:48.440       14:25:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:48.440       14:25:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 2
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@368 -- # return 0
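[Editor's note: the scripts/common.sh trace above is `lt 1.15 2`: both version strings are split on the characters `.-:` and compared component by component until one side wins, here 1 < 2 on the first component, so the function returns 0 and the pre-2.0 lcov flags get selected below. A compact re-derivation, assuming purely numeric components (the traced `decimal` helper normalizes anything else):

    cmp_versions() {    # usage: cmp_versions <ver1> <op> <ver2>
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            # first unequal component decides; missing components count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' || $2 == '>=' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' || $2 == '<=' ]]; return; }
        done
        [[ $2 == '==' || $2 == '<=' || $2 == '>=' ]]      # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "lcov is older than 2"              # matches the traced branch
]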
00:14:48.440      14:25:27 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:48.440      14:25:27 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:48.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:48.440  		--rc genhtml_branch_coverage=1
00:14:48.440  		--rc genhtml_function_coverage=1
00:14:48.440  		--rc genhtml_legend=1
00:14:48.440  		--rc geninfo_all_blocks=1
00:14:48.440  		--rc geninfo_unexecuted_blocks=1
00:14:48.440  		
00:14:48.440  		'
00:14:48.440      14:25:27 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:48.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:48.440  		--rc genhtml_branch_coverage=1
00:14:48.440  		--rc genhtml_function_coverage=1
00:14:48.440  		--rc genhtml_legend=1
00:14:48.440  		--rc geninfo_all_blocks=1
00:14:48.440  		--rc geninfo_unexecuted_blocks=1
00:14:48.440  		
00:14:48.440  		'
00:14:48.440      14:25:27 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:48.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:48.440  		--rc genhtml_branch_coverage=1
00:14:48.440  		--rc genhtml_function_coverage=1
00:14:48.440  		--rc genhtml_legend=1
00:14:48.440  		--rc geninfo_all_blocks=1
00:14:48.440  		--rc geninfo_unexecuted_blocks=1
00:14:48.440  		
00:14:48.440  		'
00:14:48.440      14:25:27 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:48.440  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:48.440  		--rc genhtml_branch_coverage=1
00:14:48.440  		--rc genhtml_function_coverage=1
00:14:48.440  		--rc genhtml_legend=1
00:14:48.440  		--rc geninfo_all_blocks=1
00:14:48.440  		--rc geninfo_unexecuted_blocks=1
00:14:48.440  		
00:14:48.440  		'
00:14:48.440     14:25:27 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:48.440      14:25:27 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:48.441       14:25:27 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:48.441       14:25:27 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:48.441       14:25:27 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:48.441       14:25:27 nvme_xnvme -- paths/export.sh@5 -- # export PATH
00:14:48.441       14:25:27 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
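[Editor's note: paths/export.sh prepends the same three toolchain directories every time it is sourced, so PATH accumulates duplicate entries, visible above. That is harmless, since lookup stops at the first hit, but a hypothetical idempotent variant would guard each prepend:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, keep PATH idempotent
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin
    export PATH
]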
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite')
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite')
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes')
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite')
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite')
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite')
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1')
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true')
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false')
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0
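[Editor's note: the xnvme/common.sh block above pins down the test matrix: three I/O mechanisms, the workloads each one runs, and the device node each mechanism opens (io_uring_cmd talks to the NVMe generic character device /dev/ng0n1 rather than the block device). method_bdev_xnvme_create_0 is the bash-side template for the bdev_xnvme_create RPC: each key maps onto one field of the "params" object serialized into the JSON configs further down. A rough illustration of that mapping (gen_conf proper also types booleans correctly; this loop only shows the key/value correspondence):

    declare -A method_bdev_xnvme_create_0=(
        [name]=xnvme_bdev
        [filename]=/dev/nvme0n1
        [io_mechanism]=libaio
        [conserve_cpu]=false
    )
    for key in "${!method_bdev_xnvme_create_0[@]}"; do
        printf '  "%s": "%s"\n' "$key" "${method_bdev_xnvme_create_0[$key]}"
    done
]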
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme
00:14:48.441    14:25:27 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:14:48.699  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:14:48.958  Waiting for block devices as requested
00:14:48.958  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:14:48.958  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:14:48.958  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:14:49.260  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:14:54.529  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:14:54.529    14:25:33 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme
00:14:54.529     14:25:33 nvme_xnvme -- xnvme/common.sh@74 -- # nproc
00:14:54.529    14:25:33 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10
00:14:54.788    14:25:33 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme
00:14:54.788    14:25:33 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*)
00:14:54.788    14:25:33 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1
00:14:54.788    14:25:33 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:14:54.788    14:25:33 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:14:54.788  No valid GPT data, bailing
00:14:54.788     14:25:33 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:14:54.788    14:25:33 nvme_xnvme -- scripts/common.sh@394 -- # pt=
00:14:54.788    14:25:33 nvme_xnvme -- scripts/common.sh@395 -- # return 1
00:14:54.788    14:25:33 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1
00:14:54.788    14:25:33 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1
00:14:54.788    14:25:33 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1
00:14:54.788    14:25:33 nvme_xnvme -- xnvme/common.sh@83 -- # return 0
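[Editor's note: prep_nvme, traced above, returns the NVMe controllers from uio_pci_generic to the kernel nvme driver via setup.sh reset, reloads the module with one poll queue per CPU (nproc fed poll_queues=10 here), and then checks each whole namespace for a partition table before handing it to the tests; "No valid GPT data, bailing" is the spdk-gpt.py probe confirming /dev/nvme0n1 is unpartitioned. A condensed sketch of that verification step:

    shopt -s extglob                               # needed for the !(*p*) pattern
    modprobe -r nvme
    modprobe nvme poll_queues="$(nproc)"           # one kernel poll queue per CPU
    for nvme in /dev/nvme*n!(*p*); do              # whole namespaces, not partitions
        # blkid prints a PTTYPE value only when a partition table exists
        if [[ -z $(blkid -s PTTYPE -o value "$nvme") ]]; then
            echo "$nvme is free for the xnvme tests"
        fi
    done
]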
00:14:54.788   14:25:33 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT
00:14:54.788   14:25:33 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:14:54.788   14:25:33 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio
00:14:54.788   14:25:33 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:14:54.788   14:25:33 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:14:54.788   14:25:33 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:14:54.788   14:25:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:14:54.788   14:25:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:14:54.788   14:25:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:14:54.788   14:25:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:14:54.788   14:25:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:54.788   14:25:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:54.788   14:25:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:54.788  ************************************
00:14:54.788  START TEST xnvme_rpc
00:14:54.788  ************************************
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70838
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70838
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70838 ']'
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:54.788  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:54.788   14:25:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:55.046  [2024-11-20 14:25:33.888645] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:14:55.046  [2024-11-20 14:25:33.888822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70838 ]
00:14:55.304  [2024-11-20 14:25:34.075210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:55.304  [2024-11-20 14:25:34.204701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:56.240   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:56.240   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:14:56.240   14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''
00:14:56.240   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.240   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:56.240  xnvme_bdev
00:14:56.240   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.240   14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.240   14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.240    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:56.499    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]]
00:14:56.499    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:14:56.499    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:14:56.499    14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:14:56.499    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.499    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:56.499    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
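[Editor's note: each of the four checks above reads one attribute of the freshly created bdev back out of the running target and compares it with what was passed to bdev_xnvme_create. rpc_xnvme is framework_get_config plus a jq select; reduced to its essentials, with rpc_cmd standing in for the framework's RPC transport against the default socket:

    rpc_cmd() { scripts/rpc.py -s "${DEFAULT_RPC_ADDR:-/var/tmp/spdk.sock}" "$@"; }
    rpc_xnvme() {
        rpc_cmd framework_get_config bdev \
            | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
    }
    [[ $(rpc_xnvme name) == xnvme_bdev ]]          # the traced assertions
    [[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
    [[ $(rpc_xnvme io_mechanism) == libaio ]]
    [[ $(rpc_xnvme conserve_cpu) == false ]]
]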
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70838
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70838 ']'
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70838
00:14:56.499    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:56.499    14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70838
00:14:56.499  killing process with pid 70838
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70838'
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70838
00:14:56.499   14:25:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70838
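[Editor's note: the teardown above is killprocess: confirm the pid is alive and is not a sudo wrapper, send it SIGTERM, then reap it with wait so the `time` summary below covers the whole test. A minimal sketch of that helper:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                  # reap the killed child
    }
]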
00:14:59.029  
00:14:59.029  real	0m3.777s
00:14:59.029  user	0m4.174s
00:14:59.029  sys	0m0.462s
00:14:59.029   14:25:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:59.029  ************************************
00:14:59.029  END TEST xnvme_rpc
00:14:59.029  ************************************
00:14:59.030   14:25:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:59.030   14:25:37 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:14:59.030   14:25:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:59.030   14:25:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:59.030   14:25:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:59.030  ************************************
00:14:59.030  START TEST xnvme_bdevperf
00:14:59.030  ************************************
00:14:59.030   14:25:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:14:59.030   14:25:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:14:59.030   14:25:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio
00:14:59.030   14:25:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:14:59.030   14:25:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:14:59.030    14:25:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:14:59.030    14:25:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:14:59.030    14:25:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:14:59.030  {
00:14:59.030    "subsystems": [
00:14:59.030      {
00:14:59.030        "subsystem": "bdev",
00:14:59.030        "config": [
00:14:59.030          {
00:14:59.030            "params": {
00:14:59.030              "io_mechanism": "libaio",
00:14:59.030              "conserve_cpu": false,
00:14:59.030              "filename": "/dev/nvme0n1",
00:14:59.030              "name": "xnvme_bdev"
00:14:59.030            },
00:14:59.030            "method": "bdev_xnvme_create"
00:14:59.030          },
00:14:59.030          {
00:14:59.030            "method": "bdev_wait_for_examine"
00:14:59.030          }
00:14:59.030        ]
00:14:59.030      }
00:14:59.030    ]
00:14:59.030  }
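[Editor's note: the JSON object above is gen_conf's output, the subsystem config telling bdevperf to create one xnvme bdev over /dev/nvme0n1 with the libaio mechanism and then wait for examine. The `--json /dev/fd/62` in the traced command line is a process substitution wired to that output; schematically:

    gen_conf() {
        printf '%s\n' '{"subsystems": [{"subsystem": "bdev", "config": [
          {"params": {"io_mechanism": "libaio", "conserve_cpu": false,
                      "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
           "method": "bdev_xnvme_create"},
          {"method": "bdev_wait_for_examine"}]}]}'
    }
    # <(gen_conf) expands to a /dev/fd/NN path, 62 in the traced run
    "$SPDK_EXAMPLE_DIR"/bdevperf --json <(gen_conf) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
]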
00:14:59.030  [2024-11-20 14:25:37.696542] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:14:59.030  [2024-11-20 14:25:37.696726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70918 ]
00:14:59.030  [2024-11-20 14:25:37.870376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:59.030  [2024-11-20 14:25:37.984754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:59.597  Running I/O for 5 seconds...
00:15:01.469      25065.00 IOPS,    97.91 MiB/s
[2024-11-20T14:25:41.385Z]     24741.50 IOPS,    96.65 MiB/s
[2024-11-20T14:25:42.760Z]     26059.67 IOPS,   101.80 MiB/s
[2024-11-20T14:25:43.409Z]     24995.00 IOPS,    97.64 MiB/s
00:15:04.428                                                                                                  Latency(us)
00:15:04.428  
[2024-11-20T14:25:43.410Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:04.428  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:15:04.428  	 xnvme_bdev          :       5.00   24857.41      97.10       0.00     0.00    2568.52     245.76    7864.32
00:15:04.428  
[2024-11-20T14:25:43.410Z]  ===================================================================================================================
00:15:04.428  
[2024-11-20T14:25:43.410Z]  Total                       :              24857.41      97.10       0.00     0.00    2568.52     245.76    7864.32
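[Editor's note: the summary row is internally consistent: at the 4 KiB I/O size, 24857.41 IOPS is exactly the reported 97.10 MiB/s. Quick check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 24857.41 * 4096 / 1048576 }'   # -> 97.10 MiB/s
]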
00:15:05.799   14:25:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:05.800   14:25:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:15:05.800    14:25:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:05.800    14:25:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:05.800    14:25:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:05.800  {
00:15:05.800    "subsystems": [
00:15:05.800      {
00:15:05.800        "subsystem": "bdev",
00:15:05.800        "config": [
00:15:05.800          {
00:15:05.800            "params": {
00:15:05.800              "io_mechanism": "libaio",
00:15:05.800              "conserve_cpu": false,
00:15:05.800              "filename": "/dev/nvme0n1",
00:15:05.800              "name": "xnvme_bdev"
00:15:05.800            },
00:15:05.800            "method": "bdev_xnvme_create"
00:15:05.800          },
00:15:05.800          {
00:15:05.800            "method": "bdev_wait_for_examine"
00:15:05.800          }
00:15:05.800        ]
00:15:05.800      }
00:15:05.800    ]
00:15:05.800  }
00:15:05.800  [2024-11-20 14:25:44.457625] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:15:05.800  [2024-11-20 14:25:44.457783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71000 ]
00:15:05.800  [2024-11-20 14:25:44.630931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:05.800  [2024-11-20 14:25:44.751599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:06.366  Running I/O for 5 seconds...
00:15:08.231      19766.00 IOPS,    77.21 MiB/s
[2024-11-20T14:25:48.146Z]     20741.00 IOPS,    81.02 MiB/s
[2024-11-20T14:25:49.520Z]     20729.67 IOPS,    80.98 MiB/s
[2024-11-20T14:25:50.455Z]     21064.75 IOPS,    82.28 MiB/s
00:15:11.473                                                                                                  Latency(us)
00:15:11.473  
[2024-11-20T14:25:50.455Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:11.473  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:15:11.473  	 xnvme_bdev          :       5.00   21268.04      83.08       0.00     0.00    3000.90     562.27   13166.78
00:15:11.473  
[2024-11-20T14:25:50.455Z]  ===================================================================================================================
00:15:11.473  
[2024-11-20T14:25:50.455Z]  Total                       :              21268.04      83.08       0.00     0.00    3000.90     562.27   13166.78
00:15:12.407  
00:15:12.407  real	0m13.555s
00:15:12.407  user	0m5.371s
00:15:12.407  sys	0m5.857s
00:15:12.407   14:25:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:12.407   14:25:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:12.407  ************************************
00:15:12.407  END TEST xnvme_bdevperf
00:15:12.407  ************************************
00:15:12.407   14:25:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:15:12.407   14:25:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:12.407   14:25:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:12.407   14:25:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:12.407  ************************************
00:15:12.407  START TEST xnvme_fio_plugin
00:15:12.407  ************************************
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:12.407    14:25:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:12.407    14:25:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:12.407    14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:12.407    14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:12.407    14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:12.407    14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:12.407   14:25:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:12.407  {
00:15:12.407    "subsystems": [
00:15:12.407      {
00:15:12.407        "subsystem": "bdev",
00:15:12.407        "config": [
00:15:12.407          {
00:15:12.407            "params": {
00:15:12.407              "io_mechanism": "libaio",
00:15:12.407              "conserve_cpu": false,
00:15:12.407              "filename": "/dev/nvme0n1",
00:15:12.407              "name": "xnvme_bdev"
00:15:12.407            },
00:15:12.407            "method": "bdev_xnvme_create"
00:15:12.407          },
00:15:12.407          {
00:15:12.407            "method": "bdev_wait_for_examine"
00:15:12.407          }
00:15:12.407        ]
00:15:12.407      }
00:15:12.407    ]
00:15:12.407  }
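[Editor's note: the fio_plugin wrapper traced above exists because this build is ASan-instrumented while fio itself is not, so the sanitizer runtime must be loaded ahead of the spdk_bdev ioengine. The ldd/grep/awk pipeline finds which libasan the plugin links against and LD_PRELOADs it first; stripped to the essentials (gen_conf as in the earlier sketch):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 above
    # preload order matters: the sanitizer runtime must be first in the process
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --rw=randread --time_based --runtime=5 \
        --thread=1 --name xnvme_bdev
]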
00:15:12.681  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:12.681  fio-3.35
00:15:12.681  Starting 1 thread
00:15:19.282  
00:15:19.282  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71120: Wed Nov 20 14:25:57 2024
00:15:19.282    read: IOPS=21.9k, BW=85.4MiB/s (89.5MB/s)(427MiB/5001msec)
00:15:19.282      slat (usec): min=5, max=3252, avg=40.34, stdev=34.75
00:15:19.282      clat (usec): min=105, max=8819, avg=1615.67, stdev=1030.85
00:15:19.282       lat (usec): min=149, max=8859, avg=1656.01, stdev=1037.27
00:15:19.282      clat percentiles (usec):
00:15:19.282       |  1.00th=[  243],  5.00th=[  383], 10.00th=[  498], 20.00th=[  709],
00:15:19.282       | 30.00th=[  906], 40.00th=[ 1106], 50.00th=[ 1336], 60.00th=[ 1647],
00:15:19.282       | 70.00th=[ 2057], 80.00th=[ 2540], 90.00th=[ 3130], 95.00th=[ 3556],
00:15:19.282       | 99.00th=[ 4178], 99.50th=[ 4424], 99.90th=[ 7898], 99.95th=[ 8225],
00:15:19.282       | 99.99th=[ 8586]
00:15:19.282     bw (  KiB/s): min=66080, max=122256, per=100.00%, avg=88347.33, stdev=20572.93, samples=9
00:15:19.282     iops        : min=16520, max=30564, avg=22086.78, stdev=5143.27, samples=9
00:15:19.282    lat (usec)   : 250=1.12%, 500=8.96%, 750=11.94%, 1000=12.83%
00:15:19.282    lat (msec)   : 2=33.88%, 4=29.52%, 10=1.75%
00:15:19.282    cpu          : usr=26.56%, sys=54.06%, ctx=82, majf=0, minf=764
00:15:19.282    IO depths    : 1=0.2%, 2=1.9%, 4=5.1%, 8=11.6%, 16=25.5%, 32=54.1%, >=64=1.7%
00:15:19.282       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:19.282       complete  : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:15:19.282       issued rwts: total=109322,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:19.282       latency   : target=0, window=0, percentile=100.00%, depth=64
00:15:19.282  
00:15:19.282  Run status group 0 (all jobs):
00:15:19.282     READ: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=427MiB (448MB), run=5001-5001msec
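[Editor's note: the headline numbers reconcile here too: 109322 reads completed in 5001 ms is ~21.9 kIOPS, i.e. the 85.4 MiB/s shown. Quick check:

    awk 'BEGIN { printf "%.1f kIOPS, %.1f MiB/s\n",
                 109322 / 5.001 / 1000, 109322 / 5.001 * 4096 / 1048576 }'
]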
00:15:19.848  -----------------------------------------------------
00:15:19.848  Suppressions used:
00:15:19.848    count      bytes template
00:15:19.848        1         11 /usr/src/fio/parse.c
00:15:19.848        1          8 libtcmalloc_minimal.so
00:15:19.848        1        904 libcrypto.so
00:15:19.848  -----------------------------------------------------
00:15:19.848  
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:19.848    14:25:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:19.848    14:25:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:19.848    14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:19.848    14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:19.848    14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:19.848    14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:19.848   14:25:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:19.848  {
00:15:19.848    "subsystems": [
00:15:19.848      {
00:15:19.848        "subsystem": "bdev",
00:15:19.848        "config": [
00:15:19.848          {
00:15:19.848            "params": {
00:15:19.848              "io_mechanism": "libaio",
00:15:19.848              "conserve_cpu": false,
00:15:19.848              "filename": "/dev/nvme0n1",
00:15:19.848              "name": "xnvme_bdev"
00:15:19.848            },
00:15:19.848            "method": "bdev_xnvme_create"
00:15:19.848          },
00:15:19.848          {
00:15:19.848            "method": "bdev_wait_for_examine"
00:15:19.848          }
00:15:19.848        ]
00:15:19.848      }
00:15:19.848    ]
00:15:19.848  }
00:15:20.107  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:20.107  fio-3.35
00:15:20.107  Starting 1 thread
00:15:26.664  
00:15:26.664  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71219: Wed Nov 20 14:26:04 2024
00:15:26.664    write: IOPS=22.2k, BW=86.6MiB/s (90.8MB/s)(433MiB/5001msec); 0 zone resets
00:15:26.664      slat (usec): min=5, max=4757, avg=39.84, stdev=37.85
00:15:26.664      clat (usec): min=121, max=7876, avg=1586.90, stdev=951.33
00:15:26.664       lat (usec): min=163, max=7919, avg=1626.75, stdev=957.02
00:15:26.664      clat percentiles (usec):
00:15:26.664       |  1.00th=[  255],  5.00th=[  392], 10.00th=[  510], 20.00th=[  725],
00:15:26.664       | 30.00th=[  930], 40.00th=[ 1139], 50.00th=[ 1369], 60.00th=[ 1647],
00:15:26.664       | 70.00th=[ 1991], 80.00th=[ 2442], 90.00th=[ 3032], 95.00th=[ 3425],
00:15:26.664       | 99.00th=[ 4080], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 4883],
00:15:26.664       | 99.99th=[ 7046]
00:15:26.664     bw (  KiB/s): min=72280, max=103360, per=100.00%, avg=89359.33, stdev=12458.22, samples=9
00:15:26.664     iops        : min=18070, max=25840, avg=22339.78, stdev=3114.63, samples=9
00:15:26.664    lat (usec)   : 250=0.92%, 500=8.72%, 750=11.44%, 1000=12.17%
00:15:26.664    lat (msec)   : 2=37.05%, 4=28.45%, 10=1.24%
00:15:26.664    cpu          : usr=25.48%, sys=53.04%, ctx=91, majf=0, minf=765
00:15:26.664    IO depths    : 1=0.2%, 2=1.8%, 4=5.0%, 8=11.6%, 16=25.6%, 32=54.0%, >=64=1.7%
00:15:26.664       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:26.664       complete  : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:15:26.664       issued rwts: total=0,110886,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:26.664       latency   : target=0, window=0, percentile=100.00%, depth=64
00:15:26.664  
00:15:26.664  Run status group 0 (all jobs):
00:15:26.664    WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=433MiB (454MB), run=5001-5001msec
00:15:27.230  -----------------------------------------------------
00:15:27.230  Suppressions used:
00:15:27.230    count      bytes template
00:15:27.230        1         11 /usr/src/fio/parse.c
00:15:27.230        1          8 libtcmalloc_minimal.so
00:15:27.230        1        904 libcrypto.so
00:15:27.230  -----------------------------------------------------
00:15:27.230  
00:15:27.230  ************************************
00:15:27.230  END TEST xnvme_fio_plugin
00:15:27.230  ************************************
00:15:27.230  
00:15:27.230  real	0m14.908s
00:15:27.230  user	0m6.511s
00:15:27.230  sys	0m6.009s
00:15:27.230   14:26:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:27.230   14:26:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:27.230   14:26:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:15:27.230   14:26:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:15:27.230   14:26:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:15:27.230   14:26:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:15:27.230   14:26:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:27.230   14:26:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:27.230   14:26:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:27.230  ************************************
00:15:27.230  START TEST xnvme_rpc
00:15:27.230  ************************************
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71306
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71306
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71306 ']'
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:27.230  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:27.230   14:26:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:27.488  [2024-11-20 14:26:06.246139] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:15:27.489  [2024-11-20 14:26:06.246495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71306 ]
00:15:27.489  [2024-11-20 14:26:06.425099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:27.747  [2024-11-20 14:26:06.615001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:28.681  xnvme_bdev
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]]
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71306
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71306 ']'
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71306
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:15:28.681   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:28.681    14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71306
00:15:28.939  killing process with pid 71306
00:15:28.939   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:28.940   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:28.940   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71306'
00:15:28.940   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71306
00:15:28.940   14:26:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71306
00:15:31.471  
00:15:31.471  real	0m3.777s
00:15:31.471  user	0m4.208s
00:15:31.471  sys	0m0.447s
00:15:31.471   14:26:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:31.471  ************************************
00:15:31.471  END TEST xnvme_rpc
00:15:31.471  ************************************
00:15:31.471   14:26:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
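The xnvme_rpc test that just finished is a create/verify/delete round trip: start spdk_tgt, create the bdev over RPC (here with -c, i.e. conserve_cpu=true), read the configuration back and assert each parameter, then delete the bdev and kill the target. Each assertion pairs framework_get_config with a jq filter; a sketch of one such check against a running target (invoking SPDK's stock scripts/rpc.py client is this note's assumption, while the RPC method and jq filter are the ones traced above):

    # Ask the target for its bdev config and extract one creation parameter.
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # This pass asserts the output is: true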
00:15:31.471   14:26:09 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:15:31.471   14:26:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:31.471   14:26:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:31.471   14:26:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:31.471  ************************************
00:15:31.471  START TEST xnvme_bdevperf
00:15:31.471  ************************************
00:15:31.471   14:26:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:15:31.471   14:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:15:31.471   14:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio
00:15:31.471   14:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:31.471   14:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:15:31.471    14:26:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:31.471    14:26:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:31.471    14:26:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:31.471  {
00:15:31.471    "subsystems": [
00:15:31.471      {
00:15:31.471        "subsystem": "bdev",
00:15:31.471        "config": [
00:15:31.471          {
00:15:31.471            "params": {
00:15:31.471              "io_mechanism": "libaio",
00:15:31.471              "conserve_cpu": true,
00:15:31.471              "filename": "/dev/nvme0n1",
00:15:31.471              "name": "xnvme_bdev"
00:15:31.471            },
00:15:31.471            "method": "bdev_xnvme_create"
00:15:31.471          },
00:15:31.471          {
00:15:31.471            "method": "bdev_wait_for_examine"
00:15:31.471          }
00:15:31.471        ]
00:15:31.471      }
00:15:31.471    ]
00:15:31.471  }
00:15:31.471  [2024-11-20 14:26:10.060698] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:15:31.471  [2024-11-20 14:26:10.061205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71387 ]
00:15:31.471  [2024-11-20 14:26:10.258131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:31.471  [2024-11-20 14:26:10.384556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:32.036  Running I/O for 5 seconds...
00:15:33.905      21481.00 IOPS,    83.91 MiB/s
[2024-11-20T14:26:13.833Z]     19498.50 IOPS,    76.17 MiB/s
[2024-11-20T14:26:14.768Z]     18871.00 IOPS,    73.71 MiB/s
[2024-11-20T14:26:16.142Z]     19563.50 IOPS,    76.42 MiB/s
00:15:37.160                                                                                                  Latency(us)
00:15:37.160  
[2024-11-20T14:26:16.142Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:37.160  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:15:37.160  	 xnvme_bdev          :       5.01   20708.73      80.89       0.00     0.00    3082.22     258.79    9055.88
00:15:37.160  
[2024-11-20T14:26:16.142Z]  ===================================================================================================================
00:15:37.160  
[2024-11-20T14:26:16.142Z]  Total                       :              20708.73      80.89       0.00     0.00    3082.22     258.79    9055.88
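The bdevperf summary can be sanity-checked with Little's law: at a fixed queue depth of 64, IOPS is approximately depth divided by average latency, and 64 / 3082.22 us lands within a fraction of a percent of the reported 20,708.73 IOPS (the MiB/s column is then just IOPS x 4 KiB):

    awk 'BEGIN { printf "%.0f IOPS expected at qd=64\n", 64 / 3082.22e-6 }'
    # prints: 20765 IOPS expected at qd=64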
00:15:38.094   14:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:38.094   14:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:15:38.094    14:26:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:38.094    14:26:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:38.094    14:26:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:38.094  {
00:15:38.094    "subsystems": [
00:15:38.094      {
00:15:38.094        "subsystem": "bdev",
00:15:38.094        "config": [
00:15:38.094          {
00:15:38.094            "params": {
00:15:38.094              "io_mechanism": "libaio",
00:15:38.094              "conserve_cpu": true,
00:15:38.094              "filename": "/dev/nvme0n1",
00:15:38.094              "name": "xnvme_bdev"
00:15:38.094            },
00:15:38.094            "method": "bdev_xnvme_create"
00:15:38.094          },
00:15:38.094          {
00:15:38.094            "method": "bdev_wait_for_examine"
00:15:38.094          }
00:15:38.094        ]
00:15:38.095      }
00:15:38.095    ]
00:15:38.095  }
00:15:38.095  [2024-11-20 14:26:16.947000] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:15:38.095  [2024-11-20 14:26:16.947488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71463 ]
00:15:38.353  [2024-11-20 14:26:17.160057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:38.353  [2024-11-20 14:26:17.268452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:38.920  Running I/O for 5 seconds...
00:15:40.843      22251.00 IOPS,    86.92 MiB/s
[2024-11-20T14:26:20.759Z]     22320.50 IOPS,    87.19 MiB/s
[2024-11-20T14:26:21.695Z]     22894.67 IOPS,    89.43 MiB/s
[2024-11-20T14:26:22.628Z]     22574.25 IOPS,    88.18 MiB/s
00:15:43.646                                                                                                  Latency(us)
00:15:43.646  
[2024-11-20T14:26:22.628Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:43.646  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:15:43.646  	 xnvme_bdev          :       5.00   21838.92      85.31       0.00     0.00    2923.24     255.07    6911.07
00:15:43.646  
[2024-11-20T14:26:22.628Z]  ===================================================================================================================
00:15:43.646  
[2024-11-20T14:26:22.628Z]  Total                       :              21838.92      85.31       0.00     0.00    2923.24     255.07    6911.07
00:15:45.020  
00:15:45.020  real	0m13.687s
00:15:45.020  user	0m5.288s
00:15:45.020  sys	0m5.984s
00:15:45.020  ************************************
00:15:45.020  END TEST xnvme_bdevperf
00:15:45.020  ************************************
00:15:45.020   14:26:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:45.020   14:26:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:45.020   14:26:23 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:15:45.020   14:26:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:45.020   14:26:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:45.020   14:26:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:45.020  ************************************
00:15:45.020  START TEST xnvme_fio_plugin
00:15:45.020  ************************************
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:45.020    14:26:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:45.020    14:26:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:45.020    14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:45.020    14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:45.020    14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:45.020    14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:45.020   14:26:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:45.020  {
00:15:45.020    "subsystems": [
00:15:45.020      {
00:15:45.020        "subsystem": "bdev",
00:15:45.020        "config": [
00:15:45.020          {
00:15:45.020            "params": {
00:15:45.020              "io_mechanism": "libaio",
00:15:45.020              "conserve_cpu": true,
00:15:45.020              "filename": "/dev/nvme0n1",
00:15:45.020              "name": "xnvme_bdev"
00:15:45.020            },
00:15:45.020            "method": "bdev_xnvme_create"
00:15:45.020          },
00:15:45.020          {
00:15:45.020            "method": "bdev_wait_for_examine"
00:15:45.020          }
00:15:45.020        ]
00:15:45.021      }
00:15:45.021    ]
00:15:45.021  }
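The wrapper traced just before this JSON exists because the fio plugin is built with AddressSanitizer: the sanitizer runtime has to be loaded before any instrumented code, so the harness asks ldd which libasan the plugin links against and preloads it ahead of the plugin itself. A condensed sketch of that logic (conf.json stands in for the /dev/fd/62 config above; the paths and fio flags are taken from the trace):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Which ASan runtime does the plugin link against, if any?
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
    # The sanitizer runtime must come first in LD_PRELOAD, then the plugin.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=conf.json --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev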
00:15:45.021  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:45.021  fio-3.35
00:15:45.021  Starting 1 thread
00:15:51.645  
00:15:51.645  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71582: Wed Nov 20 14:26:29 2024
00:15:51.645    read: IOPS=25.1k, BW=98.2MiB/s (103MB/s)(491MiB/5001msec)
00:15:51.645      slat (usec): min=5, max=1512, avg=35.36, stdev=31.74
00:15:51.645      clat (usec): min=118, max=6230, avg=1406.83, stdev=853.44
00:15:51.645       lat (usec): min=169, max=6282, avg=1442.19, stdev=859.08
00:15:51.645      clat percentiles (usec):
00:15:51.645       |  1.00th=[  231],  5.00th=[  355], 10.00th=[  461], 20.00th=[  660],
00:15:51.645       | 30.00th=[  840], 40.00th=[ 1012], 50.00th=[ 1205], 60.00th=[ 1418],
00:15:51.645       | 70.00th=[ 1729], 80.00th=[ 2147], 90.00th=[ 2671], 95.00th=[ 3097],
00:15:51.645       | 99.00th=[ 3785], 99.50th=[ 4015], 99.90th=[ 4424], 99.95th=[ 4555],
00:15:51.645       | 99.99th=[ 4948]
00:15:51.645     bw (  KiB/s): min=82656, max=109968, per=98.69%, avg=99247.11, stdev=9515.52, samples=9
00:15:51.645     iops        : min=20664, max=27492, avg=24811.78, stdev=2378.88, samples=9
00:15:51.645    lat (usec)   : 250=1.43%, 500=10.45%, 750=13.14%, 1000=14.27%
00:15:51.645    lat (msec)   : 2=37.68%, 4=22.54%, 10=0.50%
00:15:51.645    cpu          : usr=26.18%, sys=52.70%, ctx=105, majf=0, minf=764
00:15:51.645    IO depths    : 1=0.2%, 2=1.7%, 4=5.0%, 8=11.6%, 16=25.4%, 32=54.4%, >=64=1.7%
00:15:51.645       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:51.645       complete  : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:15:51.645       issued rwts: total=125728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:51.645       latency   : target=0, window=0, percentile=100.00%, depth=64
00:15:51.645  
00:15:51.645  Run status group 0 (all jobs):
00:15:51.645     READ: bw=98.2MiB/s (103MB/s), 98.2MiB/s-98.2MiB/s (103MB/s-103MB/s), io=491MiB (515MB), run=5001-5001msec
00:15:52.210  -----------------------------------------------------
00:15:52.210  Suppressions used:
00:15:52.210    count      bytes template
00:15:52.210        1         11 /usr/src/fio/parse.c
00:15:52.210        1          8 libtcmalloc_minimal.so
00:15:52.210        1        904 libcrypto.so
00:15:52.210  -----------------------------------------------------
00:15:52.210  
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:52.210    14:26:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:52.210    14:26:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:52.210    14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:52.210    14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:52.210    14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:52.210    14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:52.210   14:26:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:52.210  {
00:15:52.210    "subsystems": [
00:15:52.210      {
00:15:52.210        "subsystem": "bdev",
00:15:52.210        "config": [
00:15:52.210          {
00:15:52.210            "params": {
00:15:52.210              "io_mechanism": "libaio",
00:15:52.210              "conserve_cpu": true,
00:15:52.210              "filename": "/dev/nvme0n1",
00:15:52.210              "name": "xnvme_bdev"
00:15:52.210            },
00:15:52.210            "method": "bdev_xnvme_create"
00:15:52.210          },
00:15:52.210          {
00:15:52.210            "method": "bdev_wait_for_examine"
00:15:52.210          }
00:15:52.210        ]
00:15:52.210      }
00:15:52.210    ]
00:15:52.210  }
00:15:52.469  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:52.469  fio-3.35
00:15:52.469  Starting 1 thread
00:15:59.100  
00:15:59.100  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71680: Wed Nov 20 14:26:37 2024
00:15:59.100    write: IOPS=25.7k, BW=100MiB/s (105MB/s)(502MiB/5001msec); 0 zone resets
00:15:59.100      slat (usec): min=5, max=3377, avg=34.23, stdev=32.90
00:15:59.100      clat (usec): min=119, max=7033, avg=1387.73, stdev=835.97
00:15:59.100       lat (usec): min=168, max=7095, avg=1421.96, stdev=841.28
00:15:59.100      clat percentiles (usec):
00:15:59.100       |  1.00th=[  239],  5.00th=[  367], 10.00th=[  469], 20.00th=[  660],
00:15:59.100       | 30.00th=[  832], 40.00th=[ 1012], 50.00th=[ 1188], 60.00th=[ 1418],
00:15:59.100       | 70.00th=[ 1696], 80.00th=[ 2057], 90.00th=[ 2606], 95.00th=[ 3032],
00:15:59.100       | 99.00th=[ 3818], 99.50th=[ 4080], 99.90th=[ 4621], 99.95th=[ 4948],
00:15:59.100       | 99.99th=[ 5669]
00:15:59.100     bw (  KiB/s): min=86744, max=117904, per=98.62%, avg=101414.22, stdev=12142.18, samples=9
00:15:59.100     iops        : min=21686, max=29476, avg=25353.56, stdev=3035.55, samples=9
00:15:59.100    lat (usec)   : 250=1.22%, 500=10.23%, 750=13.68%, 1000=14.39%
00:15:59.100    lat (msec)   : 2=39.05%, 4=20.81%, 10=0.62%
00:15:59.100    cpu          : usr=27.46%, sys=51.50%, ctx=81, majf=0, minf=765
00:15:59.100    IO depths    : 1=0.1%, 2=1.6%, 4=4.8%, 8=11.5%, 16=25.5%, 32=54.7%, >=64=1.7%
00:15:59.100       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:59.100       complete  : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:15:59.100       issued rwts: total=0,128562,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:59.100       latency   : target=0, window=0, percentile=100.00%, depth=64
00:15:59.100  
00:15:59.100  Run status group 0 (all jobs):
00:15:59.100    WRITE: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=502MiB (527MB), run=5001-5001msec
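For what a single 5-second sample is worth, enabling conserve_cpu did not cost libaio throughput here: 25.7k write IOPS against 22.2k in the conserve_cpu=false pass, a gap that run-to-run variance at this depth could easily cover.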
00:15:59.667  -----------------------------------------------------
00:15:59.667  Suppressions used:
00:15:59.667    count      bytes template
00:15:59.667        1         11 /usr/src/fio/parse.c
00:15:59.667        1          8 libtcmalloc_minimal.so
00:15:59.667        1        904 libcrypto.so
00:15:59.667  -----------------------------------------------------
00:15:59.667  
00:15:59.667  ************************************
00:15:59.667  END TEST xnvme_fio_plugin
00:15:59.667  ************************************
00:15:59.667  
00:15:59.667  real	0m14.807s
00:15:59.667  user	0m6.506s
00:15:59.667  sys	0m5.855s
00:15:59.667   14:26:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:59.667   14:26:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:59.667   14:26:38 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:15:59.667   14:26:38 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:15:59.667   14:26:38 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:15:59.667   14:26:38 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:15:59.667   14:26:38 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:15:59.667   14:26:38 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:15:59.667   14:26:38 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:15:59.667   14:26:38 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:15:59.667   14:26:38 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:15:59.667   14:26:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:59.667   14:26:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:59.667   14:26:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:59.667  ************************************
00:15:59.667  START TEST xnvme_rpc
00:15:59.667  ************************************
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71766
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71766
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71766 ']'
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:59.667  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:59.667   14:26:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:59.926  [2024-11-20 14:26:38.648824] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:15:59.926  [2024-11-20 14:26:38.648998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71766 ]
00:15:59.926  [2024-11-20 14:26:38.830219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:00.185  [2024-11-20 14:26:38.954502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.119  xnvme_bdev
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71766
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71766 ']'
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71766
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:16:01.119   14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:01.119    14:26:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71766
00:16:01.119   14:26:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:01.119  killing process with pid 71766
00:16:01.120   14:26:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:01.120   14:26:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71766'
00:16:01.120   14:26:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71766
00:16:01.120   14:26:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71766
00:16:03.649  
00:16:03.649  real	0m3.591s
00:16:03.649  user	0m3.823s
00:16:03.649  sys	0m0.438s
00:16:03.649   14:26:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:03.649   14:26:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:03.649  ************************************
00:16:03.649  END TEST xnvme_rpc
00:16:03.649  ************************************
00:16:03.649   14:26:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:16:03.649   14:26:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:03.649   14:26:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:03.649   14:26:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:03.649  ************************************
00:16:03.649  START TEST xnvme_bdevperf
00:16:03.649  ************************************
00:16:03.649   14:26:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:16:03.649   14:26:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:16:03.649   14:26:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:16:03.649   14:26:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:03.649   14:26:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:16:03.649    14:26:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:03.649    14:26:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:03.649    14:26:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:03.649  {
00:16:03.649    "subsystems": [
00:16:03.649      {
00:16:03.649        "subsystem": "bdev",
00:16:03.649        "config": [
00:16:03.649          {
00:16:03.649            "params": {
00:16:03.649              "io_mechanism": "io_uring",
00:16:03.649              "conserve_cpu": false,
00:16:03.649              "filename": "/dev/nvme0n1",
00:16:03.649              "name": "xnvme_bdev"
00:16:03.649            },
00:16:03.649            "method": "bdev_xnvme_create"
00:16:03.649          },
00:16:03.649          {
00:16:03.649            "method": "bdev_wait_for_examine"
00:16:03.649          }
00:16:03.649        ]
00:16:03.649      }
00:16:03.649    ]
00:16:03.649  }
00:16:03.649  [2024-11-20 14:26:42.276867] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:16:03.649  [2024-11-20 14:26:42.277023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71846 ]
00:16:03.649  [2024-11-20 14:26:42.448250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:03.649  [2024-11-20 14:26:42.550921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:03.908  Running I/O for 5 seconds...
00:16:06.211      48166.00 IOPS,   188.15 MiB/s
[2024-11-20T14:26:46.126Z]     47898.00 IOPS,   187.10 MiB/s
[2024-11-20T14:26:47.120Z]     45904.33 IOPS,   179.31 MiB/s
[2024-11-20T14:26:48.070Z]     45676.75 IOPS,   178.42 MiB/s
00:16:09.088                                                                                                  Latency(us)
00:16:09.088  
[2024-11-20T14:26:48.070Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:09.088  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:16:09.088  	 xnvme_bdev          :       5.00   45512.62     177.78       0.00     0.00    1401.42     392.84    5749.29
00:16:09.088  
[2024-11-20T14:26:48.070Z]  ===================================================================================================================
00:16:09.088  
[2024-11-20T14:26:48.070Z]  Total                       :              45512.62     177.78       0.00     0.00    1401.42     392.84    5749.29
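Set beside the libaio randread pass earlier, the io_uring mechanism a little more than doubles throughput on this device: 45,512.62 vs 20,708.73 IOPS at the same queue depth of 64, with average latency down from ~3082 us to ~1401 us. The same Little's-law check holds here: 64 / 1401.42 us is roughly 45.7k IOPS.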
00:16:10.022   14:26:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:10.022   14:26:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:16:10.022    14:26:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:10.022    14:26:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:10.022    14:26:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:10.022  {
00:16:10.022    "subsystems": [
00:16:10.022      {
00:16:10.022        "subsystem": "bdev",
00:16:10.022        "config": [
00:16:10.022          {
00:16:10.022            "params": {
00:16:10.022              "io_mechanism": "io_uring",
00:16:10.022              "conserve_cpu": false,
00:16:10.022              "filename": "/dev/nvme0n1",
00:16:10.022              "name": "xnvme_bdev"
00:16:10.022            },
00:16:10.022            "method": "bdev_xnvme_create"
00:16:10.022          },
00:16:10.022          {
00:16:10.022            "method": "bdev_wait_for_examine"
00:16:10.022          }
00:16:10.022        ]
00:16:10.022      }
00:16:10.022    ]
00:16:10.022  }
00:16:10.022  [2024-11-20 14:26:48.994303] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:16:10.022  [2024-11-20 14:26:48.994457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71921 ]
00:16:10.280  [2024-11-20 14:26:49.169360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:10.538  [2024-11-20 14:26:49.292088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:10.796  Running I/O for 5 seconds...
00:16:13.114      41024.00 IOPS,   160.25 MiB/s
[2024-11-20T14:26:53.042Z]     42880.00 IOPS,   167.50 MiB/s
[2024-11-20T14:26:53.976Z]     43050.33 IOPS,   168.17 MiB/s
[2024-11-20T14:26:54.913Z]     43759.75 IOPS,   170.94 MiB/s
00:16:15.931                                                                                                  Latency(us)
00:16:15.931  
[2024-11-20T14:26:54.913Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:15.931  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:16:15.931  	 xnvme_bdev          :       5.00   43488.28     169.88       0.00     0.00    1466.59     916.01    7119.59
00:16:15.931  
[2024-11-20T14:26:54.913Z]  ===================================================================================================================
00:16:15.931  
[2024-11-20T14:26:54.913Z]  Total                       :              43488.28     169.88       0.00     0.00    1466.59     916.01    7119.59
00:16:16.867  
00:16:16.867  real	0m13.553s
00:16:16.867  user	0m6.893s
00:16:16.867  sys	0m6.427s
00:16:16.867   14:26:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:16.867   14:26:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:16.867  ************************************
00:16:16.867  END TEST xnvme_bdevperf
00:16:16.867  ************************************
00:16:16.867   14:26:55 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:16:16.867   14:26:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:16.867   14:26:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:16.867   14:26:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:16.867  ************************************
00:16:16.867  START TEST xnvme_fio_plugin
00:16:16.867  ************************************
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:16.867    14:26:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:16.867    14:26:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:16.867    14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:16:16.867    14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:16.867    14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:16:16.867    14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:16:16.867   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:16.868   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:16.868   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:16:16.868   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:16.868   14:26:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:16.868  {
00:16:16.868    "subsystems": [
00:16:16.868      {
00:16:16.868        "subsystem": "bdev",
00:16:16.868        "config": [
00:16:16.868          {
00:16:16.868            "params": {
00:16:16.868              "io_mechanism": "io_uring",
00:16:16.868              "conserve_cpu": false,
00:16:16.868              "filename": "/dev/nvme0n1",
00:16:16.868              "name": "xnvme_bdev"
00:16:16.868            },
00:16:16.868            "method": "bdev_xnvme_create"
00:16:16.868          },
00:16:16.868          {
00:16:16.868            "method": "bdev_wait_for_examine"
00:16:16.868          }
00:16:16.868        ]
00:16:16.868      }
00:16:16.868    ]
00:16:16.868  }
00:16:17.127  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:16:17.127  fio-3.35
00:16:17.127  Starting 1 thread
00:16:23.689  
00:16:23.689  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72046: Wed Nov 20 14:27:01 2024
00:16:23.689    read: IOPS=43.8k, BW=171MiB/s (179MB/s)(856MiB/5001msec)
00:16:23.689      slat (nsec): min=3145, max=66771, avg=4808.65, stdev=2735.87
00:16:23.689      clat (usec): min=194, max=5975, avg=1269.71, stdev=317.43
00:16:23.689       lat (usec): min=203, max=5985, avg=1274.52, stdev=318.58
00:16:23.689      clat percentiles (usec):
00:16:23.689       |  1.00th=[  840],  5.00th=[  922], 10.00th=[  979], 20.00th=[ 1045],
00:16:23.689       | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1270],
00:16:23.689       | 70.00th=[ 1336], 80.00th=[ 1450], 90.00th=[ 1614], 95.00th=[ 1778],
00:16:23.689       | 99.00th=[ 2245], 99.50th=[ 2835], 99.90th=[ 4178], 99.95th=[ 4621],
00:16:23.689       | 99.99th=[ 5145]
00:16:23.689     bw (  KiB/s): min=146432, max=190464, per=100.00%, avg=176624.89, stdev=13553.43, samples=9
00:16:23.689     iops        : min=36608, max=47616, avg=44156.22, stdev=3388.36, samples=9
00:16:23.689    lat (usec)   : 250=0.01%, 500=0.05%, 750=0.12%, 1000=12.55%
00:16:23.689    lat (msec)   : 2=85.09%, 4=2.06%, 10=0.13%
00:16:23.689    cpu          : usr=39.22%, sys=59.58%, ctx=10, majf=0, minf=762
00:16:23.689    IO depths    : 1=1.4%, 2=2.9%, 4=5.9%, 8=12.3%, 16=25.1%, 32=50.7%, >=64=1.6%
00:16:23.689       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:23.689       complete  : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:16:23.689       issued rwts: total=219059,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:23.689       latency   : target=0, window=0, percentile=100.00%, depth=64
00:16:23.689  
00:16:23.689  Run status group 0 (all jobs):
00:16:23.689     READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=856MiB (897MB), run=5001-5001msec
00:16:24.257  -----------------------------------------------------
00:16:24.257  Suppressions used:
00:16:24.257    count      bytes template
00:16:24.257        1         11 /usr/src/fio/parse.c
00:16:24.257        1          8 libtcmalloc_minimal.so
00:16:24.257        1        904 libcrypto.so
00:16:24.257  -----------------------------------------------------
00:16:24.257  
00:16:24.257   14:27:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:24.257   14:27:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:24.257    14:27:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:24.257   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:24.257    14:27:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:24.257   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:24.257    14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:24.257   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:24.257   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:24.257   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:24.257   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:16:24.257   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:16:24.257   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:16:24.257    14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:24.257    14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:16:24.257    14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:16:24.515   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:24.515   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:24.515   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:16:24.515   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:24.516   14:27:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:24.516  {
00:16:24.516    "subsystems": [
00:16:24.516      {
00:16:24.516        "subsystem": "bdev",
00:16:24.516        "config": [
00:16:24.516          {
00:16:24.516            "params": {
00:16:24.516              "io_mechanism": "io_uring",
00:16:24.516              "conserve_cpu": false,
00:16:24.516              "filename": "/dev/nvme0n1",
00:16:24.516              "name": "xnvme_bdev"
00:16:24.516            },
00:16:24.516            "method": "bdev_xnvme_create"
00:16:24.516          },
00:16:24.516          {
00:16:24.516            "method": "bdev_wait_for_examine"
00:16:24.516          }
00:16:24.516        ]
00:16:24.516      }
00:16:24.516    ]
00:16:24.516  }
00:16:24.774  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:16:24.774  fio-3.35
00:16:24.774  Starting 1 thread
00:16:31.361  
00:16:31.361  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72138: Wed Nov 20 14:27:09 2024
00:16:31.361    write: IOPS=43.7k, BW=171MiB/s (179MB/s)(854MiB/5001msec); 0 zone resets
00:16:31.361      slat (nsec): min=3208, max=78430, avg=4802.38, stdev=2815.76
00:16:31.361      clat (usec): min=765, max=4528, avg=1271.66, stdev=273.23
00:16:31.361       lat (usec): min=768, max=4559, avg=1276.47, stdev=274.63
00:16:31.361      clat percentiles (usec):
00:16:31.361       |  1.00th=[  848],  5.00th=[  922], 10.00th=[  979], 20.00th=[ 1057],
00:16:31.361       | 30.00th=[ 1123], 40.00th=[ 1172], 50.00th=[ 1237], 60.00th=[ 1287],
00:16:31.361       | 70.00th=[ 1369], 80.00th=[ 1467], 90.00th=[ 1598], 95.00th=[ 1729],
00:16:31.361       | 99.00th=[ 2147], 99.50th=[ 2311], 99.90th=[ 3425], 99.95th=[ 4015],
00:16:31.361       | 99.99th=[ 4424]
00:16:31.361     bw (  KiB/s): min=151552, max=196608, per=100.00%, avg=176483.89, stdev=13917.94, samples=9
00:16:31.361     iops        : min=37888, max=49152, avg=44120.89, stdev=3479.39, samples=9
00:16:31.361    lat (usec)   : 1000=12.56%
00:16:31.361    lat (msec)   : 2=85.82%, 4=1.57%, 10=0.05%
00:16:31.361    cpu          : usr=38.66%, sys=60.18%, ctx=14, majf=0, minf=763
00:16:31.361    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:16:31.361       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:31.361       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:16:31.361       issued rwts: total=0,218560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:31.361       latency   : target=0, window=0, percentile=100.00%, depth=64
00:16:31.361  
00:16:31.361  Run status group 0 (all jobs):
00:16:31.361    WRITE: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=854MiB (895MB), run=5001-5001msec
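
A quick consistency check on the summary above, using the issued-I/O count from the rwts line (218560 writes over the ~5 s run):

  echo "$((218560 / 5)) IOPS"              # 43712, matching IOPS=43.7k
  echo "$((43712 * 4096 / 1000000)) MB/s"  # 179, matching (179MB/s)
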
00:16:31.619  -----------------------------------------------------
00:16:31.619  Suppressions used:
00:16:31.619    count      bytes template
00:16:31.619        1         11 /usr/src/fio/parse.c
00:16:31.619        1          8 libtcmalloc_minimal.so
00:16:31.619        1        904 libcrypto.so
00:16:31.619  -----------------------------------------------------
00:16:31.619  
00:16:31.619  ************************************
00:16:31.619  END TEST xnvme_fio_plugin
00:16:31.619  ************************************
00:16:31.619  
00:16:31.619  real	0m14.807s
00:16:31.619  user	0m7.798s
00:16:31.619  sys	0m6.613s
00:16:31.619   14:27:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:31.619   14:27:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:31.619   14:27:10 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:16:31.619   14:27:10 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:16:31.619   14:27:10 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:16:31.619   14:27:10 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:16:31.619   14:27:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:31.619   14:27:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:31.619   14:27:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:31.619  ************************************
00:16:31.619  START TEST xnvme_rpc
00:16:31.619  ************************************
00:16:31.619   14:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:16:31.619   14:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:16:31.619   14:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:16:31.877  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72225
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72225
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72225 ']'
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:31.877   14:27:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:31.877  [2024-11-20 14:27:10.748314] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:16:31.877  [2024-11-20 14:27:10.748776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72225 ]
00:16:32.135  [2024-11-20 14:27:10.940629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:32.135  [2024-11-20 14:27:11.046034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:33.070   14:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:33.070   14:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:16:33.070   14:27:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
00:16:33.070   14:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.070   14:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:33.070  xnvme_bdev
00:16:33.070   14:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.070   14:27:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.070    14:27:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:33.070    14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.070   14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:16:33.070    14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:16:33.070    14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:33.070    14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:16:33.070    14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.070    14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:33.329    14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:16:33.329    14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:16:33.329    14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:33.329    14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:16:33.329    14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.329    14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:33.329    14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72225
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72225 ']'
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72225
00:16:33.329    14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:33.329    14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72225
00:16:33.329  killing process with pid 72225
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72225'
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72225
00:16:33.329   14:27:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72225
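
Outside the harness, the same create/inspect/delete round-trip can be driven with SPDK's RPC client directly. A sketch against a running spdk_tgt, assuming the stock scripts/rpc.py location; the jq filter is the one rpc_xnvme applies above:

  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  # prints: true
  ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
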
00:16:35.860  
00:16:35.860  real	0m3.705s
00:16:35.860  user	0m4.114s
00:16:35.860  sys	0m0.478s
00:16:35.860  ************************************
00:16:35.860  END TEST xnvme_rpc
00:16:35.860  ************************************
00:16:35.860   14:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:35.860   14:27:14 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:35.860   14:27:14 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:16:35.860   14:27:14 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:35.860   14:27:14 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:35.860   14:27:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:35.860  ************************************
00:16:35.860  START TEST xnvme_bdevperf
00:16:35.860  ************************************
00:16:35.860   14:27:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:16:35.860   14:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:16:35.860   14:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:16:35.860   14:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:35.860   14:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:16:35.860    14:27:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:35.860    14:27:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:35.860    14:27:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:35.860  {
00:16:35.860    "subsystems": [
00:16:35.860      {
00:16:35.860        "subsystem": "bdev",
00:16:35.860        "config": [
00:16:35.860          {
00:16:35.860            "params": {
00:16:35.860              "io_mechanism": "io_uring",
00:16:35.860              "conserve_cpu": true,
00:16:35.860              "filename": "/dev/nvme0n1",
00:16:35.860              "name": "xnvme_bdev"
00:16:35.860            },
00:16:35.860            "method": "bdev_xnvme_create"
00:16:35.860          },
00:16:35.860          {
00:16:35.860            "method": "bdev_wait_for_examine"
00:16:35.860          }
00:16:35.860        ]
00:16:35.860      }
00:16:35.860    ]
00:16:35.860  }
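
The --json /dev/fd/62 argument above comes from bash process substitution; a standalone equivalent, with the same config written to a hypothetical path on disk, would be:

  conf='{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_xnvme_create", "params": {"io_mechanism": "io_uring",
  "conserve_cpu": true, "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
  {"method": "bdev_wait_for_examine"}]}]}'
  printf '%s\n' "$conf" > /tmp/xnvme.json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
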
00:16:35.860  [2024-11-20 14:27:14.439191] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:16:35.860  [2024-11-20 14:27:14.439535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72304 ]
00:16:35.860  [2024-11-20 14:27:14.615529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:35.860  [2024-11-20 14:27:14.719715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:36.118  Running I/O for 5 seconds...
00:16:38.426      47870.00 IOPS,   186.99 MiB/s
[2024-11-20T14:27:18.341Z]     46974.50 IOPS,   183.49 MiB/s
[2024-11-20T14:27:19.276Z]     47060.33 IOPS,   183.83 MiB/s
[2024-11-20T14:27:20.210Z]     46783.25 IOPS,   182.75 MiB/s
00:16:41.228                                                                                                  Latency(us)
00:16:41.228  
[2024-11-20T14:27:20.210Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:41.228  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:16:41.228  	 xnvme_bdev          :       5.00   46484.15     181.58       0.00     0.00    1372.41     714.94    7506.85
00:16:41.228  
[2024-11-20T14:27:20.210Z]  ===================================================================================================================
00:16:41.228  
[2024-11-20T14:27:20.210Z]  Total                       :              46484.15     181.58       0.00     0.00    1372.41     714.94    7506.85
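
The table is consistent with Little's law: at queue depth 64, average latency should be close to qd/IOPS.

  awk 'BEGIN { printf "%.2f us\n", 64 / 46484.15 * 1e6 }'  # 1376.80 us, vs 1372.41 us reported
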
00:16:42.161   14:27:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:42.161   14:27:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:16:42.161    14:27:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:42.161    14:27:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:42.161    14:27:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:42.420  {
00:16:42.420    "subsystems": [
00:16:42.420      {
00:16:42.420        "subsystem": "bdev",
00:16:42.420        "config": [
00:16:42.420          {
00:16:42.420            "params": {
00:16:42.420              "io_mechanism": "io_uring",
00:16:42.420              "conserve_cpu": true,
00:16:42.420              "filename": "/dev/nvme0n1",
00:16:42.420              "name": "xnvme_bdev"
00:16:42.420            },
00:16:42.420            "method": "bdev_xnvme_create"
00:16:42.420          },
00:16:42.420          {
00:16:42.420            "method": "bdev_wait_for_examine"
00:16:42.420          }
00:16:42.420        ]
00:16:42.420      }
00:16:42.420    ]
00:16:42.420  }
00:16:42.420  [2024-11-20 14:27:21.196604] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:16:42.420  [2024-11-20 14:27:21.196921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72388 ]
00:16:42.420  [2024-11-20 14:27:21.381815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:42.678  [2024-11-20 14:27:21.499356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:42.936  Running I/O for 5 seconds...
00:16:45.242      42304.00 IOPS,   165.25 MiB/s
[2024-11-20T14:27:25.159Z]     42656.00 IOPS,   166.62 MiB/s
[2024-11-20T14:27:26.093Z]     43541.33 IOPS,   170.08 MiB/s
[2024-11-20T14:27:27.028Z]     43296.00 IOPS,   169.12 MiB/s
00:16:48.046                                                                                                  Latency(us)
00:16:48.046  
[2024-11-20T14:27:27.028Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:48.046  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:16:48.046  	 xnvme_bdev          :       5.00   43274.43     169.04       0.00     0.00    1473.50     800.58    8757.99
00:16:48.046  
[2024-11-20T14:27:27.028Z]  ===================================================================================================================
00:16:48.046  
[2024-11-20T14:27:27.028Z]  Total                       :              43274.43     169.04       0.00     0.00    1473.50     800.58    8757.99
00:16:48.990  
00:16:48.990  real	0m13.528s
00:16:48.990  user	0m8.307s
00:16:48.990  sys	0m4.680s
00:16:48.990   14:27:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:48.990  ************************************
00:16:48.990  END TEST xnvme_bdevperf
00:16:48.990  ************************************
00:16:48.990   14:27:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:48.990   14:27:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:16:48.991   14:27:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:48.991   14:27:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:48.991   14:27:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:48.991  ************************************
00:16:48.991  START TEST xnvme_fio_plugin
00:16:48.991  ************************************
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:48.991    14:27:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:48.991    14:27:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:48.991    14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:16:48.991    14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:48.991    14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:16:48.991    14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:48.991   14:27:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:49.249  {
00:16:49.249    "subsystems": [
00:16:49.249      {
00:16:49.249        "subsystem": "bdev",
00:16:49.249        "config": [
00:16:49.249          {
00:16:49.249            "params": {
00:16:49.249              "io_mechanism": "io_uring",
00:16:49.249              "conserve_cpu": true,
00:16:49.249              "filename": "/dev/nvme0n1",
00:16:49.249              "name": "xnvme_bdev"
00:16:49.249            },
00:16:49.249            "method": "bdev_xnvme_create"
00:16:49.249          },
00:16:49.249          {
00:16:49.249            "method": "bdev_wait_for_examine"
00:16:49.249          }
00:16:49.249        ]
00:16:49.249      }
00:16:49.249    ]
00:16:49.249  }
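
The same run can be expressed as a fio job file instead of command-line flags. A minimal sketch, assuming the JSON above has been saved to a hypothetical /tmp/xnvme.json:

  printf '%s\n' '[xnvme_bdev]' 'ioengine=spdk_bdev' \
    'spdk_json_conf=/tmp/xnvme.json' 'filename=xnvme_bdev' 'direct=1' \
    'bs=4k' 'iodepth=64' 'numjobs=1' 'rw=randread' 'time_based=1' \
    'runtime=5' 'thread=1' > /tmp/xnvme.fio
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio /tmp/xnvme.fio
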
00:16:49.249  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:16:49.249  fio-3.35
00:16:49.249  Starting 1 thread
00:16:55.811  
00:16:55.811  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72503: Wed Nov 20 14:27:33 2024
00:16:55.811    read: IOPS=47.5k, BW=185MiB/s (194MB/s)(928MiB/5002msec)
00:16:55.811      slat (usec): min=3, max=1163, avg= 4.40, stdev= 3.33
00:16:55.811      clat (usec): min=765, max=4473, avg=1170.53, stdev=230.21
00:16:55.811       lat (usec): min=768, max=4479, avg=1174.93, stdev=231.40
00:16:55.811      clat percentiles (usec):
00:16:55.811       |  1.00th=[  848],  5.00th=[  906], 10.00th=[  938], 20.00th=[  996],
00:16:55.811       | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1123], 60.00th=[ 1172],
00:16:55.811       | 70.00th=[ 1237], 80.00th=[ 1319], 90.00th=[ 1467], 95.00th=[ 1598],
00:16:55.811       | 99.00th=[ 1942], 99.50th=[ 2089], 99.90th=[ 2376], 99.95th=[ 2638],
00:16:55.811       | 99.99th=[ 4359]
00:16:55.811     bw (  KiB/s): min=163840, max=208896, per=99.80%, avg=189553.78, stdev=13555.38, samples=9
00:16:55.811     iops        : min=40960, max=52224, avg=47388.44, stdev=3388.85, samples=9
00:16:55.811    lat (usec)   : 1000=21.44%
00:16:55.811    lat (msec)   : 2=77.79%, 4=0.74%, 10=0.03%
00:16:55.811    cpu          : usr=59.59%, sys=36.11%, ctx=47, majf=0, minf=762
00:16:55.811    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:16:55.811       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:55.811       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:16:55.811       issued rwts: total=237504,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:55.811       latency   : target=0, window=0, percentile=100.00%, depth=64
00:16:55.811  
00:16:55.811  Run status group 0 (all jobs):
00:16:55.811     READ: bw=185MiB/s (194MB/s), 185MiB/s-185MiB/s (194MB/s-194MB/s), io=928MiB (973MB), run=5002-5002msec
00:16:56.377  -----------------------------------------------------
00:16:56.377  Suppressions used:
00:16:56.377    count      bytes template
00:16:56.377        1         11 /usr/src/fio/parse.c
00:16:56.377        1          8 libtcmalloc_minimal.so
00:16:56.377        1        904 libcrypto.so
00:16:56.377  -----------------------------------------------------
00:16:56.377  
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:56.377    14:27:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:56.377    14:27:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:56.377    14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:16:56.377    14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:56.377    14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:16:56.377    14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:56.377   14:27:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:56.377  {
00:16:56.377    "subsystems": [
00:16:56.377      {
00:16:56.377        "subsystem": "bdev",
00:16:56.377        "config": [
00:16:56.377          {
00:16:56.377            "params": {
00:16:56.377              "io_mechanism": "io_uring",
00:16:56.377              "conserve_cpu": true,
00:16:56.377              "filename": "/dev/nvme0n1",
00:16:56.377              "name": "xnvme_bdev"
00:16:56.377            },
00:16:56.377            "method": "bdev_xnvme_create"
00:16:56.377          },
00:16:56.377          {
00:16:56.377            "method": "bdev_wait_for_examine"
00:16:56.377          }
00:16:56.377        ]
00:16:56.377      }
00:16:56.377    ]
00:16:56.377  }
00:16:56.636  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:16:56.636  fio-3.35
00:16:56.636  Starting 1 thread
00:17:03.194  
00:17:03.194  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72603: Wed Nov 20 14:27:41 2024
00:17:03.194    write: IOPS=43.5k, BW=170MiB/s (178MB/s)(849MiB/5001msec); 0 zone resets
00:17:03.194      slat (nsec): min=3177, max=96275, avg=5140.73, stdev=3107.60
00:17:03.194      clat (usec): min=741, max=10603, avg=1269.84, stdev=356.22
00:17:03.194       lat (usec): min=745, max=10610, avg=1274.98, stdev=357.81
00:17:03.194      clat percentiles (usec):
00:17:03.194       |  1.00th=[  824],  5.00th=[  889], 10.00th=[  938], 20.00th=[ 1029],
00:17:03.194       | 30.00th=[ 1090], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1270],
00:17:03.194       | 70.00th=[ 1352], 80.00th=[ 1467], 90.00th=[ 1680], 95.00th=[ 1860],
00:17:03.194       | 99.00th=[ 2114], 99.50th=[ 2212], 99.90th=[ 3458], 99.95th=[ 7898],
00:17:03.194       | 99.99th=[10552]
00:17:03.194     bw (  KiB/s): min=147968, max=189440, per=100.00%, avg=174092.44, stdev=17311.42, samples=9
00:17:03.194     iops        : min=36992, max=47360, avg=43523.11, stdev=4327.85, samples=9
00:17:03.194    lat (usec)   : 750=0.01%, 1000=16.52%
00:17:03.194    lat (msec)   : 2=81.21%, 4=2.18%, 10=0.06%, 20=0.03%
00:17:03.194    cpu          : usr=57.76%, sys=38.44%, ctx=19, majf=0, minf=763
00:17:03.194    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:17:03.194       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:03.194       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0%
00:17:03.195       issued rwts: total=0,217358,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:03.195       latency   : target=0, window=0, percentile=100.00%, depth=64
00:17:03.195  
00:17:03.195  Run status group 0 (all jobs):
00:17:03.195    WRITE: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=849MiB (890MB), run=5001-5001msec
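
Set against the conserve_cpu=false randwrite run earlier in this test, bandwidth is essentially unchanged (171 vs 170 MiB/s), while the usr/sys split flips from usr=38.66%/sys=60.18% there to usr=57.76%/sys=38.44% here with conserve_cpu=true.
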
00:17:03.759  -----------------------------------------------------
00:17:03.759  Suppressions used:
00:17:03.759    count      bytes template
00:17:03.759        1         11 /usr/src/fio/parse.c
00:17:03.759        1          8 libtcmalloc_minimal.so
00:17:03.759        1        904 libcrypto.so
00:17:03.759  -----------------------------------------------------
00:17:03.759  
00:17:03.759  
00:17:03.759  real	0m14.720s
00:17:03.759  user	0m9.682s
00:17:03.759  sys	0m4.331s
00:17:03.759  ************************************
00:17:03.759  END TEST xnvme_fio_plugin
00:17:03.759  ************************************
00:17:03.759   14:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:03.759   14:27:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:03.759   14:27:42 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:17:03.759   14:27:42 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd
00:17:03.759   14:27:42 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1
00:17:03.759   14:27:42 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1
00:17:03.759   14:27:42 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:17:03.759   14:27:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:17:03.759   14:27:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:17:03.759   14:27:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
00:17:03.759   14:27:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:17:03.759   14:27:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:03.759   14:27:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:03.759   14:27:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
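
From here the harness switches io_mechanism to io_uring_cmd and targets /dev/ng0n1, the NVMe generic character device that io_uring passthrough commands are issued against, instead of the /dev/nvme0n1 block node used so far. A quick way to compare the two nodes on a machine like this one:

  ls -l /dev/nvme0n1 /dev/ng0n1  # 'b' = block device node, 'c' = NVMe generic char node
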
00:17:03.759  ************************************
00:17:03.759  START TEST xnvme_rpc
00:17:03.759  ************************************
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72691
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72691
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72691 ']'
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:03.759  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:03.759   14:27:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:04.017  [2024-11-20 14:27:42.801114] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:17:04.017  [2024-11-20 14:27:42.801445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72691 ]
00:17:04.275  [2024-11-20 14:27:43.015553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:04.275  [2024-11-20 14:27:43.158823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:05.228   14:27:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:05.228   14:27:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:17:05.228   14:27:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''
00:17:05.228   14:27:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.228   14:27:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.228  xnvme_bdev
00:17:05.228   14:27:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.228    14:27:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:17:05.228    14:27:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:05.228    14:27:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:17:05.228    14:27:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.228    14:27:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.228    14:27:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]]
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]]
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72691
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72691 ']'
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72691
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:17:05.228   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:05.228    14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72691
00:17:05.485   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:05.485   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:05.485  killing process with pid 72691
00:17:05.485   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72691'
00:17:05.485   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72691
00:17:05.485   14:27:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72691
00:17:07.387  
00:17:07.387  real	0m3.625s
00:17:07.387  user	0m3.978s
00:17:07.387  sys	0m0.416s
00:17:07.387  ************************************
00:17:07.387  END TEST xnvme_rpc
00:17:07.387  ************************************
00:17:07.387   14:27:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:07.387   14:27:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:07.387   14:27:46 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:17:07.387   14:27:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:07.387   14:27:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:07.387   14:27:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:07.387  ************************************
00:17:07.387  START TEST xnvme_bdevperf
00:17:07.387  ************************************
00:17:07.387   14:27:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:17:07.387   14:27:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:17:07.387   14:27:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd
00:17:07.387   14:27:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:07.387   14:27:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:17:07.387    14:27:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:07.387    14:27:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:07.387    14:27:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:07.646  {
00:17:07.646    "subsystems": [
00:17:07.646      {
00:17:07.646        "subsystem": "bdev",
00:17:07.646        "config": [
00:17:07.646          {
00:17:07.646            "params": {
00:17:07.646              "io_mechanism": "io_uring_cmd",
00:17:07.646              "conserve_cpu": false,
00:17:07.646              "filename": "/dev/ng0n1",
00:17:07.646              "name": "xnvme_bdev"
00:17:07.646            },
00:17:07.646            "method": "bdev_xnvme_create"
00:17:07.646          },
00:17:07.646          {
00:17:07.646            "method": "bdev_wait_for_examine"
00:17:07.646          }
00:17:07.646        ]
00:17:07.646      }
00:17:07.646    ]
00:17:07.646  }
00:17:07.646  [2024-11-20 14:27:46.477168] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:17:07.646  [2024-11-20 14:27:46.477316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72772 ]
00:17:07.904  [2024-11-20 14:27:46.649228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:07.904  [2024-11-20 14:27:46.762033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:08.163  Running I/O for 5 seconds...
00:17:10.469      50880.00 IOPS,   198.75 MiB/s
[2024-11-20T14:27:50.385Z]     50944.00 IOPS,   199.00 MiB/s
[2024-11-20T14:27:51.394Z]     51882.67 IOPS,   202.67 MiB/s
[2024-11-20T14:27:52.329Z]     51856.00 IOPS,   202.56 MiB/s
00:17:13.347                                                                                                  Latency(us)
00:17:13.347  
[2024-11-20T14:27:52.329Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:13.347  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:17:13.347  	 xnvme_bdev          :       5.00   51860.67     202.58       0.00     0.00    1230.16     463.59    5749.29
00:17:13.347  
[2024-11-20T14:27:52.329Z]  ===================================================================================================================
00:17:13.347  
[2024-11-20T14:27:52.329Z]  Total                       :              51860.67     202.58       0.00     0.00    1230.16     463.59    5749.29
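
Against the io_uring randread bdevperf result earlier (46484.15 IOPS at 1372.41 us average; note that run had conserve_cpu=true while this one is false), the io_uring_cmd passthrough path comes out ahead:

  awk 'BEGIN { printf "+%.1f%% IOPS\n", (51860.67 / 46484.15 - 1) * 100 }'  # +11.6% IOPS
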
00:17:14.282   14:27:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:14.282   14:27:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:17:14.282    14:27:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:14.282    14:27:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:14.282    14:27:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:14.282  {
00:17:14.282    "subsystems": [
00:17:14.282      {
00:17:14.282        "subsystem": "bdev",
00:17:14.282        "config": [
00:17:14.282          {
00:17:14.282            "params": {
00:17:14.282              "io_mechanism": "io_uring_cmd",
00:17:14.282              "conserve_cpu": false,
00:17:14.282              "filename": "/dev/ng0n1",
00:17:14.282              "name": "xnvme_bdev"
00:17:14.282            },
00:17:14.282            "method": "bdev_xnvme_create"
00:17:14.282          },
00:17:14.282          {
00:17:14.282            "method": "bdev_wait_for_examine"
00:17:14.282          }
00:17:14.282        ]
00:17:14.282      }
00:17:14.282    ]
00:17:14.282  }
00:17:14.282  [2024-11-20 14:27:53.174865] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:17:14.282  [2024-11-20 14:27:53.175005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72846 ]
00:17:14.540  [2024-11-20 14:27:53.343607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:14.540  [2024-11-20 14:27:53.445232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:14.798  Running I/O for 5 seconds...
00:17:17.110      51968.00 IOPS,   203.00 MiB/s
[2024-11-20T14:27:57.029Z]     50464.00 IOPS,   197.12 MiB/s
[2024-11-20T14:27:57.963Z]     50880.00 IOPS,   198.75 MiB/s
[2024-11-20T14:27:58.897Z]     50336.00 IOPS,   196.62 MiB/s
00:17:19.915                                                                                                  Latency(us)
00:17:19.915  
[2024-11-20T14:27:58.897Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:19.915  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:17:19.915  	 xnvme_bdev          :       5.00   50587.27     197.61       0.00     0.00    1260.77     688.87    6881.28
00:17:19.915  
[2024-11-20T14:27:58.897Z]  ===================================================================================================================
00:17:19.915  
[2024-11-20T14:27:58.897Z]  Total                       :              50587.27     197.61       0.00     0.00    1260.77     688.87    6881.28
00:17:20.847   14:27:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:20.847   14:27:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096
00:17:20.847    14:27:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:20.847    14:27:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:20.847    14:27:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:20.847  {
00:17:20.847    "subsystems": [
00:17:20.847      {
00:17:20.847        "subsystem": "bdev",
00:17:20.847        "config": [
00:17:20.847          {
00:17:20.847            "params": {
00:17:20.847              "io_mechanism": "io_uring_cmd",
00:17:20.847              "conserve_cpu": false,
00:17:20.847              "filename": "/dev/ng0n1",
00:17:20.847              "name": "xnvme_bdev"
00:17:20.847            },
00:17:20.847            "method": "bdev_xnvme_create"
00:17:20.847          },
00:17:20.847          {
00:17:20.847            "method": "bdev_wait_for_examine"
00:17:20.847          }
00:17:20.847        ]
00:17:20.847      }
00:17:20.847    ]
00:17:20.847  }
00:17:21.105  [2024-11-20 14:27:59.880233] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:17:21.106  [2024-11-20 14:27:59.880405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72921 ]
00:17:21.106  [2024-11-20 14:28:00.066605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:21.364  [2024-11-20 14:28:00.196223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:21.621  Running I/O for 5 seconds...
00:17:23.567      70464.00 IOPS,   275.25 MiB/s
[2024-11-20T14:28:03.981Z]     68096.00 IOPS,   266.00 MiB/s
[2024-11-20T14:28:04.547Z]     67840.00 IOPS,   265.00 MiB/s
[2024-11-20T14:28:05.922Z]     67376.00 IOPS,   263.19 MiB/s
00:17:26.940                                                                                                  Latency(us)
00:17:26.940  
[2024-11-20T14:28:05.922Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:26.940  Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096)
00:17:26.940  	 xnvme_bdev          :       5.00   65326.22     255.18       0.00     0.00     975.20     491.52    3470.43
00:17:26.940  
[2024-11-20T14:28:05.922Z]  ===================================================================================================================
00:17:26.940  
[2024-11-20T14:28:05.922Z]  Total                       :              65326.22     255.18       0.00     0.00     975.20     491.52    3470.43
00:17:27.873   14:28:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:27.873   14:28:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096
00:17:27.873    14:28:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:27.873    14:28:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:27.873    14:28:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:27.873  {
00:17:27.873    "subsystems": [
00:17:27.873      {
00:17:27.873        "subsystem": "bdev",
00:17:27.873        "config": [
00:17:27.873          {
00:17:27.873            "params": {
00:17:27.873              "io_mechanism": "io_uring_cmd",
00:17:27.873              "conserve_cpu": false,
00:17:27.873              "filename": "/dev/ng0n1",
00:17:27.873              "name": "xnvme_bdev"
00:17:27.873            },
00:17:27.873            "method": "bdev_xnvme_create"
00:17:27.873          },
00:17:27.873          {
00:17:27.873            "method": "bdev_wait_for_examine"
00:17:27.873          }
00:17:27.873        ]
00:17:27.873      }
00:17:27.873    ]
00:17:27.873  }
00:17:27.873  [2024-11-20 14:28:06.668176] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:17:27.873  [2024-11-20 14:28:06.668381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72995 ]
00:17:27.873  [2024-11-20 14:28:06.846096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:28.132  [2024-11-20 14:28:06.970544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:28.389  Running I/O for 5 seconds...
00:17:30.386      35335.00 IOPS,   138.03 MiB/s
[2024-11-20T14:28:10.740Z]     36233.50 IOPS,   141.54 MiB/s
[2024-11-20T14:28:11.673Z]     36482.33 IOPS,   142.51 MiB/s
[2024-11-20T14:28:12.626Z]     36837.75 IOPS,   143.90 MiB/s
[2024-11-20T14:28:12.626Z]     36989.60 IOPS,   144.49 MiB/s
00:17:33.644                                                                                                  Latency(us)
00:17:33.644  
[2024-11-20T14:28:12.626Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:33.644  Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096)
00:17:33.644  	 xnvme_bdev          :       5.01   36945.77     144.32       0.00     0.00    1726.54      91.69   16681.89
00:17:33.644  
[2024-11-20T14:28:12.626Z]  ===================================================================================================================
00:17:33.644  
[2024-11-20T14:28:12.626Z]  Total                       :              36945.77     144.32       0.00     0.00    1726.54      91.69   16681.89
00:17:34.581  
00:17:34.581  real	0m27.019s
00:17:34.581  user	0m15.764s
00:17:34.581  sys	0m10.781s
00:17:34.581   14:28:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:34.581   14:28:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:34.581  ************************************
00:17:34.581  END TEST xnvme_bdevperf
00:17:34.581  ************************************
00:17:34.581   14:28:13 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:17:34.581   14:28:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:34.581   14:28:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:34.581   14:28:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:34.581  ************************************
00:17:34.581  START TEST xnvme_fio_plugin
00:17:34.581  ************************************
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:34.581    14:28:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:34.581    14:28:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:34.581    14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:34.581    14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:34.581    14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:17:34.581    14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:34.581   14:28:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:34.581  {
00:17:34.581    "subsystems": [
00:17:34.581      {
00:17:34.581        "subsystem": "bdev",
00:17:34.581        "config": [
00:17:34.581          {
00:17:34.581            "params": {
00:17:34.581              "io_mechanism": "io_uring_cmd",
00:17:34.581              "conserve_cpu": false,
00:17:34.581              "filename": "/dev/ng0n1",
00:17:34.581              "name": "xnvme_bdev"
00:17:34.581            },
00:17:34.581            "method": "bdev_xnvme_create"
00:17:34.581          },
00:17:34.581          {
00:17:34.581            "method": "bdev_wait_for_examine"
00:17:34.581          }
00:17:34.581        ]
00:17:34.581      }
00:17:34.581    ]
00:17:34.581  }
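The JSON block above is the bdev configuration that gen_conf emits and hands to fio over /dev/fd/62. A minimal stand-alone sketch of the same invocation, assuming the repo path, the /usr/src/fio install, and the /dev/ng0n1 char device from this run (the /tmp path is purely illustrative):

#!/usr/bin/env bash
# Replay the traced fio run by hand: write the bdev config to a file
# instead of an anonymous fd, preload the SPDK ioengine plugin, run fio.
set -euo pipefail

SPDK_DIR=/home/vagrant/spdk_repo/spdk
PLUGIN=$SPDK_DIR/build/fio/spdk_bdev

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": false,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# fio loads the external spdk_bdev ioengine via LD_PRELOAD, exactly as
# the traced command line does.
LD_PRELOAD=$PLUGIN /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev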
00:17:34.839  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:17:34.839  fio-3.35
00:17:34.839  Starting 1 thread
00:17:41.461  
00:17:41.461  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73119: Wed Nov 20 14:28:19 2024
00:17:41.461    read: IOPS=48.4k, BW=189MiB/s (198MB/s)(945MiB/5001msec)
00:17:41.461      slat (usec): min=3, max=476, avg= 4.21, stdev= 2.08
00:17:41.461      clat (usec): min=760, max=6115, avg=1155.51, stdev=221.82
00:17:41.461       lat (usec): min=764, max=6122, avg=1159.72, stdev=222.44
00:17:41.461      clat percentiles (usec):
00:17:41.461       |  1.00th=[  857],  5.00th=[  922], 10.00th=[  955], 20.00th=[ 1004],
00:17:41.461       | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1123], 60.00th=[ 1156],
00:17:41.461       | 70.00th=[ 1205], 80.00th=[ 1270], 90.00th=[ 1418], 95.00th=[ 1532],
00:17:41.461       | 99.00th=[ 1827], 99.50th=[ 1991], 99.90th=[ 2835], 99.95th=[ 4047],
00:17:41.461       | 99.99th=[ 5997]
00:17:41.461     bw (  KiB/s): min=184320, max=209408, per=100.00%, avg=195128.89, stdev=7617.64, samples=9
00:17:41.461     iops        : min=46080, max=52352, avg=48782.22, stdev=1904.41, samples=9
00:17:41.461    lat (usec)   : 1000=19.49%
00:17:41.461    lat (msec)   : 2=80.04%, 4=0.42%, 10=0.05%
00:17:41.461    cpu          : usr=43.56%, sys=55.44%, ctx=7, majf=0, minf=762
00:17:41.461    IO depths    : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:17:41.461       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:41.461       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:17:41.461       issued rwts: total=241907,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:41.461       latency   : target=0, window=0, percentile=100.00%, depth=64
00:17:41.461  
00:17:41.461  Run status group 0 (all jobs):
00:17:41.461     READ: bw=189MiB/s (198MB/s), 189MiB/s-189MiB/s (198MB/s-198MB/s), io=945MiB (991MB), run=5001-5001msec
00:17:42.028  -----------------------------------------------------
00:17:42.028  Suppressions used:
00:17:42.028    count      bytes template
00:17:42.028        1         11 /usr/src/fio/parse.c
00:17:42.028        1          8 libtcmalloc_minimal.so
00:17:42.028        1        904 libcrypto.so
00:17:42.028  -----------------------------------------------------
00:17:42.029  
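Each fio run in this test goes through the same sanitizer-detection preamble (autotest_common.sh@1343-1356 in the trace): walk the plugin's shared-object dependencies and, if an ASan runtime is linked in, preload it ahead of the plugin itself. A condensed sketch of that step, assuming the plugin path from this log:

# Sanitizer-detection preamble, condensed: if the fio plugin links an
# ASan runtime, that runtime must come first in LD_PRELOAD.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
    # field 3 is the resolved path.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# ASan runtime (if any) first, then the plugin -- same order as the trace.
export LD_PRELOAD="${asan_lib:+$asan_lib }$plugin"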
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:42.029    14:28:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:42.029    14:28:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:42.029    14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:42.029    14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:17:42.029    14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:17:42.029    14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:17:42.029   14:28:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:17:42.029  {
00:17:42.029    "subsystems": [
00:17:42.029      {
00:17:42.029        "subsystem": "bdev",
00:17:42.029        "config": [
00:17:42.029          {
00:17:42.029            "params": {
00:17:42.029              "io_mechanism": "io_uring_cmd",
00:17:42.029              "conserve_cpu": false,
00:17:42.029              "filename": "/dev/ng0n1",
00:17:42.029              "name": "xnvme_bdev"
00:17:42.029            },
00:17:42.029            "method": "bdev_xnvme_create"
00:17:42.029          },
00:17:42.029          {
00:17:42.029            "method": "bdev_wait_for_examine"
00:17:42.029          }
00:17:42.029        ]
00:17:42.029      }
00:17:42.029    ]
00:17:42.029  }
00:17:42.288  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:17:42.288  fio-3.35
00:17:42.288  Starting 1 thread
00:17:48.852  
00:17:48.852  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73211: Wed Nov 20 14:28:26 2024
00:17:48.852    write: IOPS=41.0k, BW=160MiB/s (168MB/s)(802MiB/5006msec); 0 zone resets
00:17:48.852      slat (nsec): min=3073, max=81250, avg=4684.71, stdev=2377.53
00:17:48.852      clat (usec): min=89, max=28246, avg=1392.78, stdev=1080.63
00:17:48.852       lat (usec): min=96, max=28250, avg=1397.47, stdev=1080.79
00:17:48.852      clat percentiles (usec):
00:17:48.852       |  1.00th=[  469],  5.00th=[  840], 10.00th=[  930], 20.00th=[ 1012],
00:17:48.852       | 30.00th=[ 1074], 40.00th=[ 1139], 50.00th=[ 1205], 60.00th=[ 1303],
00:17:48.852       | 70.00th=[ 1401], 80.00th=[ 1532], 90.00th=[ 1778], 95.00th=[ 2212],
00:17:48.852       | 99.00th=[ 6063], 99.50th=[ 8094], 99.90th=[12518], 99.95th=[26346],
00:17:48.852       | 99.99th=[27395]
00:17:48.852     bw (  KiB/s): min=62656, max=198144, per=100.00%, avg=164235.20, stdev=38734.00, samples=10
00:17:48.852     iops        : min=15664, max=49536, avg=41058.80, stdev=9683.50, samples=10
00:17:48.852    lat (usec)   : 100=0.01%, 250=0.17%, 500=0.99%, 750=2.49%, 1000=14.79%
00:17:48.852    lat (msec)   : 2=74.93%, 4=4.88%, 10=1.55%, 20=0.14%, 50=0.06%
00:17:48.852    cpu          : usr=39.54%, sys=59.34%, ctx=17, majf=0, minf=763
00:17:48.852    IO depths    : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.2%, 16=21.6%, 32=57.0%, >=64=2.5%
00:17:48.852       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:48.852       complete  : 0=0.0%, 4=97.9%, 8=0.2%, 16=0.2%, 32=0.3%, 64=1.4%, >=64=0.0%
00:17:48.852       issued rwts: total=0,205355,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:48.852       latency   : target=0, window=0, percentile=100.00%, depth=64
00:17:48.852  
00:17:48.852  Run status group 0 (all jobs):
00:17:48.852    WRITE: bw=160MiB/s (168MB/s), 160MiB/s-160MiB/s (168MB/s-168MB/s), io=802MiB (841MB), run=5006-5006msec
00:17:49.430  -----------------------------------------------------
00:17:49.430  Suppressions used:
00:17:49.430    count      bytes template
00:17:49.430        1         11 /usr/src/fio/parse.c
00:17:49.430        1          8 libtcmalloc_minimal.so
00:17:49.430        1        904 libcrypto.so
00:17:49.430  -----------------------------------------------------
00:17:49.430  
00:17:49.430  
00:17:49.430  real	0m14.898s
00:17:49.430  user	0m8.103s
00:17:49.430  sys	0m6.381s
00:17:49.430   14:28:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:49.430  ************************************
00:17:49.430  END TEST xnvme_fio_plugin
00:17:49.430  ************************************
00:17:49.430   14:28:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:17:49.430   14:28:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:17:49.430   14:28:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:17:49.430   14:28:28 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:17:49.430   14:28:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:17:49.430   14:28:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:49.430   14:28:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:49.430   14:28:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:49.430  ************************************
00:17:49.430  START TEST xnvme_rpc
00:17:49.430  ************************************
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73297
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73297
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73297 ']'
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:49.430  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:49.430   14:28:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:49.689  [2024-11-20 14:28:28.521537] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:17:49.689  [2024-11-20 14:28:28.521737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73297 ]
00:17:49.948  [2024-11-20 14:28:28.700952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:49.948  [2024-11-20 14:28:28.805302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
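The waitforlisten call above blocks until the freshly started spdk_tgt answers on its UNIX-domain RPC socket. A simplified stand-in for that polling loop (the real helper does more bookkeeping; rpc_get_methods is a standard SPDK RPC, used here only as a liveness probe):

# Poll until the target's RPC socket answers a trivial call, or give up
# after max_retries attempts.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # Bail out if the target died during startup.
        kill -0 "$pid" 2>/dev/null || return 1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
            rpc_get_methods &>/dev/null && return 0
        sleep 1
    done
    return 1
}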
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:50.938  xnvme_bdev
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]]
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]]
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73297
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73297 ']'
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73297
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:50.938    14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73297
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:50.938  killing process with pid 73297
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73297'
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73297
00:17:50.938   14:28:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73297
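Stripped of xtrace noise, the test body above is a create/inspect/delete round trip over the RPC socket. The same steps issued through rpc.py directly, using only the calls that appear in the trace (rpc_cmd is a thin wrapper around rpc.py):

# Create the bdev with conserve_cpu enabled (-c), read one param back,
# then tear it down.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
# -> xnvme_bdev

# framework_get_config returns the live bdev config; jq extracts one
# param from the bdev_xnvme_create entry, mirroring the rpc_xnvme helper.
$RPC framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# -> true

$RPC bdev_xnvme_delete xnvme_bdev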
00:17:53.472  
00:17:53.472  real	0m3.651s
00:17:53.472  user	0m3.894s
00:17:53.472  sys	0m0.481s
00:17:53.472   14:28:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:53.472   14:28:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:53.472  ************************************
00:17:53.472  END TEST xnvme_rpc
00:17:53.472  ************************************
00:17:53.472   14:28:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:17:53.472   14:28:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:53.472   14:28:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:53.472   14:28:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:17:53.472  ************************************
00:17:53.472  START TEST xnvme_bdevperf
00:17:53.472  ************************************
00:17:53.472   14:28:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:17:53.472   14:28:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:17:53.472   14:28:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd
00:17:53.472   14:28:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:17:53.472   14:28:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:17:53.472    14:28:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:17:53.472    14:28:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:17:53.472    14:28:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:53.472  {
00:17:53.472    "subsystems": [
00:17:53.472      {
00:17:53.472        "subsystem": "bdev",
00:17:53.472        "config": [
00:17:53.472          {
00:17:53.472            "params": {
00:17:53.472              "io_mechanism": "io_uring_cmd",
00:17:53.472              "conserve_cpu": true,
00:17:53.472              "filename": "/dev/ng0n1",
00:17:53.472              "name": "xnvme_bdev"
00:17:53.472            },
00:17:53.472            "method": "bdev_xnvme_create"
00:17:53.472          },
00:17:53.472          {
00:17:53.472            "method": "bdev_wait_for_examine"
00:17:53.472          }
00:17:53.472        ]
00:17:53.472      }
00:17:53.472    ]
00:17:53.472  }
00:17:53.472  [2024-11-20 14:28:32.163532] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:17:53.472  [2024-11-20 14:28:32.163704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73375 ]
00:17:53.472  [2024-11-20 14:28:32.351749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:53.730  [2024-11-20 14:28:32.453913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:53.989  Running I/O for 5 seconds...
00:17:55.927      49280.00 IOPS,   192.50 MiB/s
[2024-11-20T14:28:35.844Z]     48640.00 IOPS,   190.00 MiB/s
[2024-11-20T14:28:36.780Z]     49322.67 IOPS,   192.67 MiB/s
[2024-11-20T14:28:38.156Z]     48848.00 IOPS,   190.81 MiB/s
00:17:59.174                                                                                                  Latency(us)
00:17:59.174  
[2024-11-20T14:28:38.156Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:17:59.174  Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:17:59.174  	 xnvme_bdev          :       5.00   49529.48     193.47       0.00     0.00    1287.91     770.79    5183.30
00:17:59.174  
[2024-11-20T14:28:38.156Z]  ===================================================================================================================
00:17:59.174  
[2024-11-20T14:28:38.156Z]  Total                       :              49529.48     193.47       0.00     0.00    1287.91     770.79    5183.30
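The bdevperf run above takes the same bdev JSON as the fio tests, here with conserve_cpu enabled. A sketch of the equivalent stand-alone invocation, reusing the illustrative /tmp config file from the earlier fio sketch:

# Flip conserve_cpu on in the illustrative config, then run the same
# randread workload bdevperf was traced with: queue depth 64, 5 s,
# 4096-byte I/O, against the bdev named xnvme_bdev.
sed 's/"conserve_cpu": false/"conserve_cpu": true/' /tmp/xnvme_bdev.json \
    > /tmp/xnvme_bdev_cc.json
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev_cc.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096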
00:18:00.110   14:28:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:00.110   14:28:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:18:00.110    14:28:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:00.110    14:28:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:00.110    14:28:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:00.110  {
00:18:00.110    "subsystems": [
00:18:00.110      {
00:18:00.110        "subsystem": "bdev",
00:18:00.110        "config": [
00:18:00.110          {
00:18:00.110            "params": {
00:18:00.110              "io_mechanism": "io_uring_cmd",
00:18:00.110              "conserve_cpu": true,
00:18:00.110              "filename": "/dev/ng0n1",
00:18:00.110              "name": "xnvme_bdev"
00:18:00.110            },
00:18:00.110            "method": "bdev_xnvme_create"
00:18:00.110          },
00:18:00.110          {
00:18:00.110            "method": "bdev_wait_for_examine"
00:18:00.110          }
00:18:00.110        ]
00:18:00.110      }
00:18:00.110    ]
00:18:00.110  }
00:18:00.110  [2024-11-20 14:28:38.917429] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:18:00.110  [2024-11-20 14:28:38.917629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73451 ]
00:18:00.384  [2024-11-20 14:28:39.100115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:00.384  [2024-11-20 14:28:39.224927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:00.658  Running I/O for 5 seconds...
00:18:03.028      52352.00 IOPS,   204.50 MiB/s
[2024-11-20T14:28:42.574Z]     50624.00 IOPS,   197.75 MiB/s
[2024-11-20T14:28:43.951Z]     49344.00 IOPS,   192.75 MiB/s
[2024-11-20T14:28:44.891Z]     49200.00 IOPS,   192.19 MiB/s
00:18:05.909                                                                                                  Latency(us)
00:18:05.909  
[2024-11-20T14:28:44.891Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:05.909  Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:18:05.909  	 xnvme_bdev          :       5.00   49053.57     191.62       0.00     0.00    1300.09     793.13    6017.40
00:18:05.909  
[2024-11-20T14:28:44.891Z]  ===================================================================================================================
00:18:05.909  
[2024-11-20T14:28:44.891Z]  Total                       :              49053.57     191.62       0.00     0.00    1300.09     793.13    6017.40
00:18:06.844   14:28:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:06.844   14:28:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096
00:18:06.844    14:28:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:06.844    14:28:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:06.844    14:28:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:06.844  {
00:18:06.844    "subsystems": [
00:18:06.844      {
00:18:06.844        "subsystem": "bdev",
00:18:06.844        "config": [
00:18:06.844          {
00:18:06.844            "params": {
00:18:06.844              "io_mechanism": "io_uring_cmd",
00:18:06.844              "conserve_cpu": true,
00:18:06.844              "filename": "/dev/ng0n1",
00:18:06.844              "name": "xnvme_bdev"
00:18:06.844            },
00:18:06.844            "method": "bdev_xnvme_create"
00:18:06.844          },
00:18:06.844          {
00:18:06.844            "method": "bdev_wait_for_examine"
00:18:06.844          }
00:18:06.844        ]
00:18:06.844      }
00:18:06.844    ]
00:18:06.844  }
00:18:06.844  [2024-11-20 14:28:45.780047] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:18:06.844  [2024-11-20 14:28:45.780297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73531 ]
00:18:07.103  [2024-11-20 14:28:45.970856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:07.362  [2024-11-20 14:28:46.097977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:07.621  Running I/O for 5 seconds...
00:18:09.491      72704.00 IOPS,   284.00 MiB/s
[2024-11-20T14:28:49.847Z]     72448.00 IOPS,   283.00 MiB/s
[2024-11-20T14:28:50.783Z]     71744.00 IOPS,   280.25 MiB/s
[2024-11-20T14:28:51.717Z]     69776.00 IOPS,   272.56 MiB/s
00:18:12.735                                                                                                  Latency(us)
00:18:12.735  
[2024-11-20T14:28:51.717Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:12.735  Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096)
00:18:12.735  	 xnvme_bdev          :       5.00   69240.40     270.47       0.00     0.00     920.19     491.52    3664.06
00:18:12.735  
[2024-11-20T14:28:51.717Z]  ===================================================================================================================
00:18:12.735  
[2024-11-20T14:28:51.717Z]  Total                       :              69240.40     270.47       0.00     0.00     920.19     491.52    3664.06
00:18:13.671   14:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:13.671   14:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096
00:18:13.672    14:28:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:18:13.672    14:28:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:18:13.672    14:28:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:13.672  {
00:18:13.672    "subsystems": [
00:18:13.672      {
00:18:13.672        "subsystem": "bdev",
00:18:13.672        "config": [
00:18:13.672          {
00:18:13.672            "params": {
00:18:13.672              "io_mechanism": "io_uring_cmd",
00:18:13.672              "conserve_cpu": true,
00:18:13.672              "filename": "/dev/ng0n1",
00:18:13.672              "name": "xnvme_bdev"
00:18:13.672            },
00:18:13.672            "method": "bdev_xnvme_create"
00:18:13.672          },
00:18:13.672          {
00:18:13.672            "method": "bdev_wait_for_examine"
00:18:13.672          }
00:18:13.672        ]
00:18:13.672      }
00:18:13.672    ]
00:18:13.672  }
00:18:13.672  [2024-11-20 14:28:52.566555] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:18:13.672  [2024-11-20 14:28:52.566963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73605 ]
00:18:13.931  [2024-11-20 14:28:52.760140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:13.931  [2024-11-20 14:28:52.888516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:14.498  Running I/O for 5 seconds...
00:18:16.444      39682.00 IOPS,   155.01 MiB/s
[2024-11-20T14:28:56.360Z]     38915.00 IOPS,   152.01 MiB/s
[2024-11-20T14:28:57.295Z]     40245.00 IOPS,   157.21 MiB/s
[2024-11-20T14:28:58.669Z]     39944.25 IOPS,   156.03 MiB/s
[2024-11-20T14:28:58.669Z]     40186.20 IOPS,   156.98 MiB/s
00:18:19.687                                                                                                  Latency(us)
00:18:19.687  
[2024-11-20T14:28:58.669Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:19.687  Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096)
00:18:19.687  	 xnvme_bdev          :       5.01   40141.74     156.80       0.00     0.00    1588.97     139.64   21090.68
00:18:19.687  
[2024-11-20T14:28:58.669Z]  ===================================================================================================================
00:18:19.687  
[2024-11-20T14:28:58.669Z]  Total                       :              40141.74     156.80       0.00     0.00    1588.97     139.64   21090.68
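The four bdevperf runs above (randread, randwrite, unmap, write_zeroes) are successive iterations of the io_pattern loop at xnvme.sh@15. Condensed, and again reusing the illustrative config file from the sketches above:

# One bdevperf run per workload, as the io_pattern loop does.
for w in randread randwrite unmap write_zeroes; do
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme_bdev_cc.json -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done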
00:18:20.666  ************************************
00:18:20.666  END TEST xnvme_bdevperf
00:18:20.666  ************************************
00:18:20.666  
00:18:20.666  real	0m27.288s
00:18:20.666  user	0m20.372s
00:18:20.666  sys	0m5.296s
00:18:20.666   14:28:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:20.666   14:28:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:20.666   14:28:59 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:18:20.666   14:28:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:18:20.666   14:28:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:20.666   14:28:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:20.666  ************************************
00:18:20.666  START TEST xnvme_fio_plugin
00:18:20.666  ************************************
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:20.666    14:28:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:20.666    14:28:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:20.666    14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:20.666    14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:18:20.666    14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:20.666    14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:20.666   14:28:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:20.666  {
00:18:20.666    "subsystems": [
00:18:20.666      {
00:18:20.666        "subsystem": "bdev",
00:18:20.666        "config": [
00:18:20.666          {
00:18:20.666            "params": {
00:18:20.666              "io_mechanism": "io_uring_cmd",
00:18:20.666              "conserve_cpu": true,
00:18:20.666              "filename": "/dev/ng0n1",
00:18:20.666              "name": "xnvme_bdev"
00:18:20.666            },
00:18:20.666            "method": "bdev_xnvme_create"
00:18:20.666          },
00:18:20.666          {
00:18:20.666            "method": "bdev_wait_for_examine"
00:18:20.666          }
00:18:20.666        ]
00:18:20.666      }
00:18:20.666    ]
00:18:20.666  }
00:18:20.962  xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:18:20.962  fio-3.35
00:18:20.962  Starting 1 thread
00:18:27.521  
00:18:27.521  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73724: Wed Nov 20 14:29:05 2024
00:18:27.521    read: IOPS=48.8k, BW=190MiB/s (200MB/s)(953MiB/5002msec)
00:18:27.521      slat (usec): min=3, max=134, avg= 4.13, stdev= 1.64
00:18:27.521      clat (usec): min=744, max=2601, avg=1146.16, stdev=171.19
00:18:27.521       lat (usec): min=747, max=2608, avg=1150.28, stdev=171.63
00:18:27.521      clat percentiles (usec):
00:18:27.521       |  1.00th=[  857],  5.00th=[  922], 10.00th=[  955], 20.00th=[ 1012],
00:18:27.521       | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1156],
00:18:27.521       | 70.00th=[ 1205], 80.00th=[ 1270], 90.00th=[ 1352], 95.00th=[ 1450],
00:18:27.521       | 99.00th=[ 1680], 99.50th=[ 1827], 99.90th=[ 2114], 99.95th=[ 2278],
00:18:27.521       | 99.99th=[ 2507]
00:18:27.521     bw (  KiB/s): min=183296, max=211456, per=100.00%, avg=196380.44, stdev=10660.52, samples=9
00:18:27.521     iops        : min=45824, max=52864, avg=49095.11, stdev=2665.13, samples=9
00:18:27.521    lat (usec)   : 750=0.01%, 1000=18.23%
00:18:27.521    lat (msec)   : 2=81.54%, 4=0.23%
00:18:27.521    cpu          : usr=74.73%, sys=22.14%, ctx=10, majf=0, minf=762
00:18:27.521    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:18:27.521       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:27.521       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:18:27.521       issued rwts: total=243904,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:27.521       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:27.521  
00:18:27.521  Run status group 0 (all jobs):
00:18:27.521     READ: bw=190MiB/s (200MB/s), 190MiB/s-190MiB/s (200MB/s-200MB/s), io=953MiB (999MB), run=5002-5002msec
00:18:28.088  -----------------------------------------------------
00:18:28.088  Suppressions used:
00:18:28.088    count      bytes template
00:18:28.088        1         11 /usr/src/fio/parse.c
00:18:28.088        1          8 libtcmalloc_minimal.so
00:18:28.088        1        904 libcrypto.so
00:18:28.088  -----------------------------------------------------
00:18:28.088  
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:28.088    14:29:06 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:18:28.088    14:29:06 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:28.088    14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:28.088    14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:28.088    14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:28.088    14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:28.088   14:29:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:18:28.088  {
00:18:28.088    "subsystems": [
00:18:28.088      {
00:18:28.088        "subsystem": "bdev",
00:18:28.088        "config": [
00:18:28.088          {
00:18:28.088            "params": {
00:18:28.088              "io_mechanism": "io_uring_cmd",
00:18:28.088              "conserve_cpu": true,
00:18:28.088              "filename": "/dev/ng0n1",
00:18:28.088              "name": "xnvme_bdev"
00:18:28.088            },
00:18:28.089            "method": "bdev_xnvme_create"
00:18:28.089          },
00:18:28.089          {
00:18:28.089            "method": "bdev_wait_for_examine"
00:18:28.089          }
00:18:28.089        ]
00:18:28.089      }
00:18:28.089    ]
00:18:28.089  }
00:18:28.347  xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:18:28.347  fio-3.35
00:18:28.347  Starting 1 thread
00:18:34.950  
00:18:34.950  xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73816: Wed Nov 20 14:29:12 2024
00:18:34.950    write: IOPS=48.8k, BW=191MiB/s (200MB/s)(953MiB/5001msec); 0 zone resets
00:18:34.950      slat (nsec): min=2953, max=73024, avg=4211.53, stdev=1747.59
00:18:34.950      clat (usec): min=750, max=5671, avg=1144.09, stdev=198.03
00:18:34.950       lat (usec): min=754, max=5676, avg=1148.30, stdev=198.55
00:18:34.950      clat percentiles (usec):
00:18:34.950       |  1.00th=[  857],  5.00th=[  914], 10.00th=[  947], 20.00th=[  996],
00:18:34.950       | 30.00th=[ 1037], 40.00th=[ 1074], 50.00th=[ 1106], 60.00th=[ 1156],
00:18:34.950       | 70.00th=[ 1205], 80.00th=[ 1270], 90.00th=[ 1401], 95.00th=[ 1500],
00:18:34.950       | 99.00th=[ 1713], 99.50th=[ 1795], 99.90th=[ 2180], 99.95th=[ 2769],
00:18:34.950       | 99.99th=[ 5604]
00:18:34.950     bw (  KiB/s): min=180736, max=221184, per=100.00%, avg=195128.89, stdev=13134.68, samples=9
00:18:34.950     iops        : min=45184, max=55296, avg=48782.22, stdev=3283.67, samples=9
00:18:34.950    lat (usec)   : 1000=21.59%
00:18:34.950    lat (msec)   : 2=78.21%, 4=0.18%, 10=0.03%
00:18:34.950    cpu          : usr=76.00%, sys=20.90%, ctx=8, majf=0, minf=763
00:18:34.950    IO depths    : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:18:34.950       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:34.950       complete  : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0%
00:18:34.950       issued rwts: total=0,243968,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:34.950       latency   : target=0, window=0, percentile=100.00%, depth=64
00:18:34.950  
00:18:34.950  Run status group 0 (all jobs):
00:18:34.950    WRITE: bw=191MiB/s (200MB/s), 191MiB/s-191MiB/s (200MB/s-200MB/s), io=953MiB (999MB), run=5001-5001msec
00:18:35.208  -----------------------------------------------------
00:18:35.208  Suppressions used:
00:18:35.208    count      bytes template
00:18:35.208        1         11 /usr/src/fio/parse.c
00:18:35.208        1          8 libtcmalloc_minimal.so
00:18:35.208        1        904 libcrypto.so
00:18:35.208  -----------------------------------------------------
00:18:35.208  
00:18:35.208  ************************************
00:18:35.208  END TEST xnvme_fio_plugin
00:18:35.208  ************************************
00:18:35.208  
00:18:35.208  real	0m14.753s
00:18:35.208  user	0m11.369s
00:18:35.208  sys	0m2.788s
00:18:35.208   14:29:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:35.208   14:29:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:18:35.467  Process with pid 73297 is not found
00:18:35.467   14:29:14 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73297
00:18:35.467   14:29:14 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73297 ']'
00:18:35.467   14:29:14 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73297
00:18:35.467  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73297) - No such process
00:18:35.467   14:29:14 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73297 is not found'
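This is the killprocess helper from earlier, now hitting its already-exited branch: kill -0 probes whether the PID is still alive, so a target that has already gone away is only reported, not treated as a failure. A sketch of the helper's control flow (the traced helper also special-cases sudo-wrapped targets, which is omitted here; the function name is illustrative):

killprocess_sketch() {
    local pid=$1
    # Probe first: signal 0 delivers nothing, it only checks existence.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    # The trace inspects the process name at this point (reactor_0 here).
    ps --no-headers -o comm= "$pid"
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}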
00:18:35.467  ************************************
00:18:35.467  END TEST nvme_xnvme
00:18:35.467  ************************************
00:18:35.467  
00:18:35.467  real	3m47.446s
00:18:35.467  user	2m17.066s
00:18:35.467  sys	1m15.268s
00:18:35.467   14:29:14 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:35.467   14:29:14 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:35.467   14:29:14  -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:18:35.467   14:29:14  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:35.467   14:29:14  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:35.467   14:29:14  -- common/autotest_common.sh@10 -- # set +x
00:18:35.467  ************************************
00:18:35.467  START TEST blockdev_xnvme
00:18:35.467  ************************************
00:18:35.467   14:29:14 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:18:35.467  * Looking for test storage...
00:18:35.467  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:18:35.467    14:29:14 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:18:35.467     14:29:14 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:18:35.467     14:29:14 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version
00:18:35.467    14:29:14 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:35.467    14:29:14 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:18:35.468    14:29:14 blockdev_xnvme -- scripts/common.sh@345 -- # : 1
00:18:35.468    14:29:14 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:35.468    14:29:14 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:35.468     14:29:14 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1
00:18:35.468     14:29:14 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1
00:18:35.468     14:29:14 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:35.468     14:29:14 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1
00:18:35.468    14:29:14 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:18:35.468     14:29:14 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2
00:18:35.468     14:29:14 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2
00:18:35.468     14:29:14 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:35.468     14:29:14 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2
00:18:35.468    14:29:14 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:18:35.468    14:29:14 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:35.468    14:29:14 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:35.468    14:29:14 blockdev_xnvme -- scripts/common.sh@368 -- # return 0
00:18:35.468    14:29:14 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:35.468    14:29:14 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:18:35.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:35.468  		--rc genhtml_branch_coverage=1
00:18:35.468  		--rc genhtml_function_coverage=1
00:18:35.468  		--rc genhtml_legend=1
00:18:35.468  		--rc geninfo_all_blocks=1
00:18:35.468  		--rc geninfo_unexecuted_blocks=1
00:18:35.468  		
00:18:35.468  		'
00:18:35.468    14:29:14 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:18:35.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:35.468  		--rc genhtml_branch_coverage=1
00:18:35.468  		--rc genhtml_function_coverage=1
00:18:35.468  		--rc genhtml_legend=1
00:18:35.468  		--rc geninfo_all_blocks=1
00:18:35.468  		--rc geninfo_unexecuted_blocks=1
00:18:35.468  		
00:18:35.468  		'
00:18:35.468    14:29:14 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:18:35.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:35.468  		--rc genhtml_branch_coverage=1
00:18:35.468  		--rc genhtml_function_coverage=1
00:18:35.468  		--rc genhtml_legend=1
00:18:35.468  		--rc geninfo_all_blocks=1
00:18:35.468  		--rc geninfo_unexecuted_blocks=1
00:18:35.468  		
00:18:35.468  		'
00:18:35.468    14:29:14 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:18:35.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:35.468  		--rc genhtml_branch_coverage=1
00:18:35.468  		--rc genhtml_function_coverage=1
00:18:35.468  		--rc genhtml_legend=1
00:18:35.468  		--rc geninfo_all_blocks=1
00:18:35.468  		--rc geninfo_unexecuted_blocks=1
00:18:35.468  		
00:18:35.468  		'
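The scripts/common.sh trace above is a dotted-version comparison: "lt 1.15 2" splits both versions on ".", "-" and ":" and compares them element-wise, padding the shorter array, to decide which set of lcov coverage flags to export. A condensed sketch of that logic, assuming purely numeric components:

# Dotted-version less-than, condensed from the cmp_versions trace.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        # Missing elements compare as 0, so 1.15 vs 2 works out.
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc option set"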
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:18:35.468    14:29:14 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@20 -- # :
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:18:35.468    14:29:14 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device=
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek=
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx=
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]]
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]]
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73956
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:18:35.468   14:29:14 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73956
00:18:35.468   14:29:14 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73956 ']'
00:18:35.468   14:29:14 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:35.468   14:29:14 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:35.468   14:29:14 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:35.468  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:35.468   14:29:14 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:35.468   14:29:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:35.726  [2024-11-20 14:29:14.550187] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:18:35.726  [2024-11-20 14:29:14.550515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73956 ]
00:18:35.984  [2024-11-20 14:29:14.720730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:35.984  [2024-11-20 14:29:14.822956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
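blockdev.sh@46-49 above shows the harness's startup idiom: launch spdk_tgt in the background, arm a trap so it is torn down on any exit path, then wait for the RPC socket before running tests. Condensed (the bare kill stands in for the traced killprocess helper):

# Background the target, remember its PID, and guarantee cleanup.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'kill "$spdk_tgt_pid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
# ...then poll /var/tmp/spdk.sock as in the waitforlisten sketch earlier...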
00:18:36.918   14:29:15 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:36.918   14:29:15 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0
00:18:36.918   14:29:15 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:18:36.918   14:29:15 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf
00:18:36.918   14:29:15 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring
00:18:36.918   14:29:15 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes
00:18:36.918   14:29:15 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:18:37.175  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:18:37.741  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:18:37.741  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:18:37.741  0000:00:12.0 (1b36 0010): Already using the nvme driver
00:18:37.741  0000:00:13.0 (1b36 0010): Already using the nvme driver
00:18:37.741   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:37.741   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]]
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]]
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]]
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]]
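[editor note] The get_zoned_devs loop above reduces to reading one kernel sysfs attribute per namespace. A minimal standalone sketch of the same filter (device names illustrative; /sys/block/<dev>/queue/zoned reads "none" on conventional devices, "host-aware" or "host-managed" on zoned ones):
    # Sketch: skip zoned namespaces the way get_zoned_devs does.
    for path in /sys/block/nvme*; do
        [[ -e $path/queue/zoned ]] || continue
        dev=${path##*/}
        zoned=$(<"$path/queue/zoned")
        [[ $zoned != none ]] && echo "excluding zoned device $dev ($zoned)"
    done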
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n*
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 ))
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.742    14:29:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c'
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:37.742  nvme0n1
00:18:37.742  nvme0n2
00:18:37.742  nvme0n3
00:18:37.742  nvme1n1
00:18:37.742  nvme2n1
00:18:37.742  nvme3n1
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
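[editor note] The batched rpc_cmd above is equivalent to one bdev_xnvme_create call per namespace. A hedged sketch using SPDK's scripts/rpc.py against a running spdk_tgt (args: device path, bdev name, I/O mechanism; -c is passed through exactly as in the trace, see `rpc.py bdev_xnvme_create -h` on your build for its meaning):
    # Sketch: create one xNVMe bdev per NVMe namespace over io_uring.
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            bdev_xnvme_create "$nvme" "${nvme##*/}" io_uring -c
    done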
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:37.742   14:29:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.742   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat
00:18:37.742    14:29:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:18:37.742    14:29:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.742    14:29:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:37.742    14:29:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.742    14:29:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:18:37.742    14:29:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.742    14:29:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:38.007    14:29:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:38.007    14:29:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:18:38.007    14:29:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:38.007    14:29:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:38.007    14:29:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:38.007   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:18:38.007    14:29:16 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:18:38.007    14:29:16 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:18:38.007    14:29:16 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:38.007    14:29:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:38.007    14:29:16 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:38.007   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:18:38.007    14:29:16 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name
00:18:38.007    14:29:16 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "nvme0n1",' '  "aliases": [' '    "bd92fc02-9e70-440a-9302-8a103a3b5c98"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "bd92fc02-9e70-440a-9302-8a103a3b5c98",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n2",' '  "aliases": [' '    "14f3e285-ef1d-45e4-8bc4-a4ed4a1b418b"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "14f3e285-ef1d-45e4-8bc4-a4ed4a1b418b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n3",' '  "aliases": [' '    "130ceb0e-eaaf-4ae5-9e15-d9f69680817c"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "130ceb0e-eaaf-4ae5-9e15-d9f69680817c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme1n1",' '  "aliases": [' '    "7e8bb7bf-0a9b-46df-80a3-4766fc5ca9e4"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "7e8bb7bf-0a9b-46df-80a3-4766fc5ca9e4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme2n1",' '  "aliases": [' '    "d339e848-8dc4-4380-8f57-ec46fffb97e6"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "d339e848-8dc4-4380-8f57-ec46fffb97e6",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme3n1",' '  "aliases": [' '    "9e7b0877-2bb7-45ed-8076-5f29d505e1f7"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "9e7b0877-2bb7-45ed-8076-5f29d505e1f7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}'
00:18:38.008   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:18:38.008   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1
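[editor note] The mapfile/jq pair above collects the unclaimed bdev names and the first one becomes the hello-world target. An equivalent one-off sketch, assuming a running spdk_tgt on the default /var/tmp/spdk.sock:
    # Sketch: list unclaimed bdevs and pick the first as hello_world_bdev.
    mapfile -t bdevs_name < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[] | select(.claimed == false) | .name'
    )
    hello_world_bdev=${bdevs_name[0]}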
00:18:38.008   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:18:38.008   14:29:16 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73956
00:18:38.008   14:29:16 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73956 ']'
00:18:38.008   14:29:16 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73956
00:18:38.008    14:29:16 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname
00:18:38.008   14:29:16 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:38.008    14:29:16 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73956
00:18:38.008  killing process with pid 73956
00:18:38.008   14:29:16 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:38.008   14:29:16 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:38.008   14:29:16 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73956'
00:18:38.008   14:29:16 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73956
00:18:38.008   14:29:16 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73956
00:18:40.543   14:29:18 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:18:40.543   14:29:18 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:18:40.543   14:29:18 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:18:40.543   14:29:18 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:40.543   14:29:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:40.543  ************************************
00:18:40.543  START TEST bdev_hello_world
00:18:40.543  ************************************
00:18:40.543   14:29:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''
00:18:40.543  [2024-11-20 14:29:19.081016] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:18:40.543  [2024-11-20 14:29:19.081384] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74240 ]
00:18:40.543  [2024-11-20 14:29:19.264712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:40.543  [2024-11-20 14:29:19.389529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:41.110  [2024-11-20 14:29:19.797235] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:18:41.110  [2024-11-20 14:29:19.797303] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1
00:18:41.110  [2024-11-20 14:29:19.797335] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:18:41.110  [2024-11-20 14:29:19.799745] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:18:41.110  [2024-11-20 14:29:19.800163] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:18:41.110  [2024-11-20 14:29:19.800207] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:18:41.110  [2024-11-20 14:29:19.800378] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:18:41.110  
00:18:41.110  [2024-11-20 14:29:19.800417] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:18:42.046  
00:18:42.046  real	0m1.813s
00:18:42.046  user	0m1.475s
00:18:42.046  sys	0m0.219s
00:18:42.046   14:29:20 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:42.046  ************************************
00:18:42.046  END TEST bdev_hello_world
00:18:42.046  ************************************
00:18:42.046   14:29:20 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
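[editor note] To rerun just this step outside the harness, the invocation is exactly what the trace shows (paths as on this CI VM; adjust locally). The example writes "Hello World!" to the bdev, reads it back, and exits:
    # Sketch: standalone hello_bdev run; -b selects the target bdev
    # from the bdev.json config generated earlier in the run.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1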
00:18:42.046   14:29:20 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:18:42.046   14:29:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:42.047   14:29:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:42.047   14:29:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:42.047  ************************************
00:18:42.047  START TEST bdev_bounds
00:18:42.047  ************************************
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74278
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:18:42.047  Process bdevio pid: 74278
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74278'
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74278
00:18:42.047  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74278 ']'
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:42.047   14:29:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:18:42.047  [2024-11-20 14:29:20.949706] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:18:42.047  [2024-11-20 14:29:20.949872] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74278 ]
00:18:42.305  [2024-11-20 14:29:21.139429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:18:42.563  [2024-11-20 14:29:21.310889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:42.563  [2024-11-20 14:29:21.310962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:42.563  [2024-11-20 14:29:21.310967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:43.130   14:29:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:43.130   14:29:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:18:43.130   14:29:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:18:43.130  I/O targets:
00:18:43.130    nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:18:43.130    nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:18:43.130    nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:18:43.130    nvme1n1: 262144 blocks of 4096 bytes (1024 MiB)
00:18:43.130    nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:18:43.130    nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:18:43.130  
00:18:43.130  
00:18:43.130       CUnit - A unit testing framework for C - Version 2.1-3
00:18:43.130       http://cunit.sourceforge.net/
00:18:43.130  
00:18:43.130  
00:18:43.130  Suite: bdevio tests on: nvme3n1
00:18:43.130    Test: blockdev write read block ...passed
00:18:43.130    Test: blockdev write zeroes read block ...passed
00:18:43.130    Test: blockdev write zeroes read no split ...passed
00:18:43.390    Test: blockdev write zeroes read split ...passed
00:18:43.390    Test: blockdev write zeroes read split partial ...passed
00:18:43.390    Test: blockdev reset ...passed
00:18:43.390    Test: blockdev write read 8 blocks ...passed
00:18:43.390    Test: blockdev write read size > 128k ...passed
00:18:43.390    Test: blockdev write read invalid size ...passed
00:18:43.390    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:43.390    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:43.390    Test: blockdev write read max offset ...passed
00:18:43.390    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:43.390    Test: blockdev writev readv 8 blocks ...passed
00:18:43.390    Test: blockdev writev readv 30 x 1block ...passed
00:18:43.390    Test: blockdev writev readv block ...passed
00:18:43.390    Test: blockdev writev readv size > 128k ...passed
00:18:43.390    Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:43.390    Test: blockdev comparev and writev ...passed
00:18:43.390    Test: blockdev nvme passthru rw ...passed
00:18:43.390    Test: blockdev nvme passthru vendor specific ...passed
00:18:43.390    Test: blockdev nvme admin passthru ...passed
00:18:43.390    Test: blockdev copy ...passed
00:18:43.390  Suite: bdevio tests on: nvme2n1
00:18:43.390    Test: blockdev write read block ...passed
00:18:43.390    Test: blockdev write zeroes read block ...passed
00:18:43.390    Test: blockdev write zeroes read no split ...passed
00:18:43.390    Test: blockdev write zeroes read split ...passed
00:18:43.390    Test: blockdev write zeroes read split partial ...passed
00:18:43.390    Test: blockdev reset ...passed
00:18:43.390    Test: blockdev write read 8 blocks ...passed
00:18:43.390    Test: blockdev write read size > 128k ...passed
00:18:43.390    Test: blockdev write read invalid size ...passed
00:18:43.390    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:43.390    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:43.390    Test: blockdev write read max offset ...passed
00:18:43.390    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:43.390    Test: blockdev writev readv 8 blocks ...passed
00:18:43.390    Test: blockdev writev readv 30 x 1block ...passed
00:18:43.390    Test: blockdev writev readv block ...passed
00:18:43.390    Test: blockdev writev readv size > 128k ...passed
00:18:43.390    Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:43.390    Test: blockdev comparev and writev ...passed
00:18:43.390    Test: blockdev nvme passthru rw ...passed
00:18:43.390    Test: blockdev nvme passthru vendor specific ...passed
00:18:43.390    Test: blockdev nvme admin passthru ...passed
00:18:43.390    Test: blockdev copy ...passed
00:18:43.390  Suite: bdevio tests on: nvme1n1
00:18:43.390    Test: blockdev write read block ...passed
00:18:43.390    Test: blockdev write zeroes read block ...passed
00:18:43.390    Test: blockdev write zeroes read no split ...passed
00:18:43.390    Test: blockdev write zeroes read split ...passed
00:18:43.390    Test: blockdev write zeroes read split partial ...passed
00:18:43.390    Test: blockdev reset ...passed
00:18:43.390    Test: blockdev write read 8 blocks ...passed
00:18:43.390    Test: blockdev write read size > 128k ...passed
00:18:43.390    Test: blockdev write read invalid size ...passed
00:18:43.390    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:43.390    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:43.390    Test: blockdev write read max offset ...passed
00:18:43.390    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:43.390    Test: blockdev writev readv 8 blocks ...passed
00:18:43.390    Test: blockdev writev readv 30 x 1block ...passed
00:18:43.390    Test: blockdev writev readv block ...passed
00:18:43.390    Test: blockdev writev readv size > 128k ...passed
00:18:43.390    Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:43.390    Test: blockdev comparev and writev ...passed
00:18:43.390    Test: blockdev nvme passthru rw ...passed
00:18:43.390    Test: blockdev nvme passthru vendor specific ...passed
00:18:43.390    Test: blockdev nvme admin passthru ...passed
00:18:43.390    Test: blockdev copy ...passed
00:18:43.390  Suite: bdevio tests on: nvme0n3
00:18:43.390    Test: blockdev write read block ...passed
00:18:43.390    Test: blockdev write zeroes read block ...passed
00:18:43.390    Test: blockdev write zeroes read no split ...passed
00:18:43.390    Test: blockdev write zeroes read split ...passed
00:18:43.648    Test: blockdev write zeroes read split partial ...passed
00:18:43.648    Test: blockdev reset ...passed
00:18:43.648    Test: blockdev write read 8 blocks ...passed
00:18:43.648    Test: blockdev write read size > 128k ...passed
00:18:43.648    Test: blockdev write read invalid size ...passed
00:18:43.648    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:43.648    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:43.648    Test: blockdev write read max offset ...passed
00:18:43.648    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:43.648    Test: blockdev writev readv 8 blocks ...passed
00:18:43.648    Test: blockdev writev readv 30 x 1block ...passed
00:18:43.648    Test: blockdev writev readv block ...passed
00:18:43.648    Test: blockdev writev readv size > 128k ...passed
00:18:43.648    Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:43.648    Test: blockdev comparev and writev ...passed
00:18:43.648    Test: blockdev nvme passthru rw ...passed
00:18:43.648    Test: blockdev nvme passthru vendor specific ...passed
00:18:43.648    Test: blockdev nvme admin passthru ...passed
00:18:43.648    Test: blockdev copy ...passed
00:18:43.648  Suite: bdevio tests on: nvme0n2
00:18:43.648    Test: blockdev write read block ...passed
00:18:43.648    Test: blockdev write zeroes read block ...passed
00:18:43.648    Test: blockdev write zeroes read no split ...passed
00:18:43.648    Test: blockdev write zeroes read split ...passed
00:18:43.648    Test: blockdev write zeroes read split partial ...passed
00:18:43.648    Test: blockdev reset ...passed
00:18:43.648    Test: blockdev write read 8 blocks ...passed
00:18:43.648    Test: blockdev write read size > 128k ...passed
00:18:43.648    Test: blockdev write read invalid size ...passed
00:18:43.648    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:43.648    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:43.648    Test: blockdev write read max offset ...passed
00:18:43.648    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:43.648    Test: blockdev writev readv 8 blocks ...passed
00:18:43.648    Test: blockdev writev readv 30 x 1block ...passed
00:18:43.648    Test: blockdev writev readv block ...passed
00:18:43.648    Test: blockdev writev readv size > 128k ...passed
00:18:43.648    Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:43.648    Test: blockdev comparev and writev ...passed
00:18:43.648    Test: blockdev nvme passthru rw ...passed
00:18:43.648    Test: blockdev nvme passthru vendor specific ...passed
00:18:43.649    Test: blockdev nvme admin passthru ...passed
00:18:43.649    Test: blockdev copy ...passed
00:18:43.649  Suite: bdevio tests on: nvme0n1
00:18:43.649    Test: blockdev write read block ...passed
00:18:43.649    Test: blockdev write zeroes read block ...passed
00:18:43.649    Test: blockdev write zeroes read no split ...passed
00:18:43.649    Test: blockdev write zeroes read split ...passed
00:18:43.649    Test: blockdev write zeroes read split partial ...passed
00:18:43.649    Test: blockdev reset ...passed
00:18:43.649    Test: blockdev write read 8 blocks ...passed
00:18:43.649    Test: blockdev write read size > 128k ...passed
00:18:43.649    Test: blockdev write read invalid size ...passed
00:18:43.649    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:18:43.649    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:18:43.649    Test: blockdev write read max offset ...passed
00:18:43.649    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:18:43.649    Test: blockdev writev readv 8 blocks ...passed
00:18:43.649    Test: blockdev writev readv 30 x 1block ...passed
00:18:43.649    Test: blockdev writev readv block ...passed
00:18:43.649    Test: blockdev writev readv size > 128k ...passed
00:18:43.649    Test: blockdev writev readv size > 128k in two iovs ...passed
00:18:43.649    Test: blockdev comparev and writev ...passed
00:18:43.649    Test: blockdev nvme passthru rw ...passed
00:18:43.649    Test: blockdev nvme passthru vendor specific ...passed
00:18:43.649    Test: blockdev nvme admin passthru ...passed
00:18:43.649    Test: blockdev copy ...passed
00:18:43.649  
00:18:43.649  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:18:43.649                suites      6      6    n/a      0        0
00:18:43.649                 tests    138    138    138      0        0
00:18:43.649               asserts    780    780    780      0      n/a
00:18:43.649  
00:18:43.649  Elapsed time =    1.322 seconds
00:18:43.649  0
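[editor note] The bounds suite above is driven by bdevio in wait-for-RPC mode plus a trigger script; a condensed reproduction sketch (the harness waits for the RPC socket with waitforlisten, for which a plain sleep is a crude stand-in here):
    # Sketch: start bdevio as an RPC server, then kick off the CUnit suites.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    sleep 1   # stand-in for waitforlisten
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"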
00:18:43.649   14:29:22 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74278
00:18:43.649   14:29:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74278 ']'
00:18:43.649   14:29:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74278
00:18:43.649    14:29:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:18:43.649   14:29:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:43.649    14:29:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74278
00:18:43.649   14:29:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:43.649   14:29:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:43.649  killing process with pid 74278
00:18:43.649   14:29:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74278'
00:18:43.649   14:29:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74278
00:18:43.649   14:29:22 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74278
00:18:45.025  ************************************
00:18:45.025  END TEST bdev_bounds
00:18:45.025  ************************************
00:18:45.025   14:29:23 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:18:45.025  
00:18:45.025  real	0m2.769s
00:18:45.025  user	0m6.940s
00:18:45.025  sys	0m0.384s
00:18:45.025   14:29:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:45.025   14:29:23 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:18:45.025   14:29:23 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:18:45.025   14:29:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:18:45.025   14:29:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:45.025   14:29:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:45.025  ************************************
00:18:45.025  START TEST bdev_nbd
00:18:45.025  ************************************
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:18:45.025    14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:18:45.025  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74346
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74346 /var/tmp/spdk-nbd.sock
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74346 ']'
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:45.025   14:29:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:18:45.025  [2024-11-20 14:29:23.772609] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:18:45.025  [2024-11-20 14:29:23.773378] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:45.025  [2024-11-20 14:29:23.945051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:45.303  [2024-11-20 14:29:24.050713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1'
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1'
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:18:45.896   14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:18:45.896    14:29:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:18:46.463    14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:46.463  1+0 records in
00:18:46.463  1+0 records out
00:18:46.463  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567832 s, 7.2 MB/s
00:18:46.463    14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:18:46.463   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
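[editor note] Each loop iteration above maps one bdev to an NBD node, polls /proc/partitions until the kernel exposes it, then proves it is readable with one direct-I/O block. The same handshake condensed (socket path, retry count, and dd parameters mirror the trace):
    # Sketch: attach a bdev to a free /dev/nbdN and wait for it to appear.
    rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock)
    dev=$("${rpc[@]}" nbd_start_disk nvme0n2)   # prints e.g. /dev/nbd1
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "${dev##*/}" /proc/partitions && break
        sleep 0.1
    done
    dd if="$dev" of=/dev/null bs=4096 count=1 iflag=direct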
00:18:46.463    14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:18:46.721    14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:46.721  1+0 records in
00:18:46.721  1+0 records out
00:18:46.721  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486848 s, 8.4 MB/s
00:18:46.721    14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:18:46.721   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:18:46.721    14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:18:46.979    14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:46.979  1+0 records in
00:18:46.979  1+0 records out
00:18:46.979  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513328 s, 8.0 MB/s
00:18:46.979    14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:18:46.979   14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:18:46.979    14:29:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:18:47.237    14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:47.237  1+0 records in
00:18:47.237  1+0 records out
00:18:47.237  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000531939 s, 7.7 MB/s
00:18:47.237    14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:47.237   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:47.238   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:18:47.238   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:18:47.238    14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:18:47.495    14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:47.495  1+0 records in
00:18:47.495  1+0 records out
00:18:47.495  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644946 s, 6.4 MB/s
00:18:47.495    14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:18:47.495   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:18:47.495    14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:18:48.061    14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:48.061  1+0 records in
00:18:48.061  1+0 records out
00:18:48.061  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570918 s, 7.2 MB/s
00:18:48.061    14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:18:48.061   14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:18:48.061    14:29:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:48.061   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd0",
00:18:48.061      "bdev_name": "nvme0n1"
00:18:48.061    },
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd1",
00:18:48.061      "bdev_name": "nvme0n2"
00:18:48.061    },
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd2",
00:18:48.061      "bdev_name": "nvme0n3"
00:18:48.061    },
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd3",
00:18:48.061      "bdev_name": "nvme1n1"
00:18:48.061    },
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd4",
00:18:48.061      "bdev_name": "nvme2n1"
00:18:48.061    },
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd5",
00:18:48.061      "bdev_name": "nvme3n1"
00:18:48.061    }
00:18:48.061  ]'
00:18:48.061   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:18:48.061    14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd0",
00:18:48.061      "bdev_name": "nvme0n1"
00:18:48.061    },
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd1",
00:18:48.061      "bdev_name": "nvme0n2"
00:18:48.061    },
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd2",
00:18:48.061      "bdev_name": "nvme0n3"
00:18:48.061    },
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd3",
00:18:48.061      "bdev_name": "nvme1n1"
00:18:48.061    },
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd4",
00:18:48.061      "bdev_name": "nvme2n1"
00:18:48.061    },
00:18:48.061    {
00:18:48.061      "nbd_device": "/dev/nbd5",
00:18:48.061      "bdev_name": "nvme3n1"
00:18:48.061    }
00:18:48.061  ]'
00:18:48.061    14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:18:48.319   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:18:48.319   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:48.320   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:18:48.320   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:48.320   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:48.320   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:48.320   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:48.578    14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:48.578   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:48.578   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:48.578   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:48.578   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:48.578   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:48.578   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:48.578   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:48.578   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:48.578   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:18:48.837    14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:18:48.837   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:18:48.837   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:18:48.837   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:48.837   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:48.837   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:18:48.837   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:48.837   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:48.837   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:48.837   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:18:49.095    14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:18:49.095   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:18:49.095   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:18:49.095   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:49.095   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:49.095   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:18:49.095   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:49.095   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:49.095   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:49.095   14:29:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:18:49.354    14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:18:49.354   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:18:49.354   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:18:49.354   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:49.354   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:49.354   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:18:49.354   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:49.354   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:49.354   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:49.354   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:18:49.613    14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:18:49.613   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:18:49.613   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:18:49.613   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:49.613   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:49.613   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:18:49.613   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:49.613   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:49.613   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:49.613   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:18:49.872    14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:18:49.872   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:18:49.872   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:18:49.872   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:49.872   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:49.872   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:18:49.872   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:49.872   14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
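The teardown side polls in the opposite direction; a sketch of waitfornbd_exit reconstructed from the trace above, with the sleep again assumed:

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # Stop waiting once the device has vanished from /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed retry interval; not visible in the trace
        done
        return 0
    }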
00:18:49.872    14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:49.872    14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:49.872     14:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:50.438    14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:18:50.438     14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:18:50.438     14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:50.438    14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:18:50.438     14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:18:50.438     14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:50.438     14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:18:50.438    14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:18:50.438    14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
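nbd_get_count, as traced just above, reduces the RPC's JSON to a device count; a sketch assuming rpc.py is on PATH:

    nbd_get_count() {
        local rpc_server=$1
        local json names count
        json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c exits nonzero on zero matches, hence the || true guard seen in the trace.
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }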
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:18:50.438   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
00:18:50.696  /dev/nbd0
00:18:50.696    14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:18:50.696   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:18:50.696   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:18:50.696   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:50.696   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:50.696   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:50.696   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:18:50.696   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:50.696   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:50.697   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:50.697   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:50.697  1+0 records in
00:18:50.697  1+0 records out
00:18:50.697  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403684 s, 10.1 MB/s
00:18:50.697    14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:50.697   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:50.697   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:50.697   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:50.697   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:50.697   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:50.697   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:18:50.697   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1
00:18:50.955  /dev/nbd1
00:18:50.955    14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:50.955  1+0 records in
00:18:50.955  1+0 records out
00:18:50.955  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448568 s, 9.1 MB/s
00:18:50.955    14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:18:50.955   14:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10
00:18:51.214  /dev/nbd10
00:18:51.214    14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:51.214  1+0 records in
00:18:51.214  1+0 records out
00:18:51.214  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573387 s, 7.1 MB/s
00:18:51.214    14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:18:51.214   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11
00:18:51.780  /dev/nbd11
00:18:51.780    14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:18:51.780   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:18:51.780   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:18:51.780   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:51.780   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:51.780   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:51.780   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:18:51.780   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:51.780   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:51.780   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:51.780   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:51.780  1+0 records in
00:18:51.780  1+0 records out
00:18:51.780  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580726 s, 7.1 MB/s
00:18:51.780    14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:51.781   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:51.781   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:51.781   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:51.781   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:51.781   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:51.781   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:18:51.781   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12
00:18:52.039  /dev/nbd12
00:18:52.039    14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:52.039  1+0 records in
00:18:52.039  1+0 records out
00:18:52.039  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058339 s, 7.0 MB/s
00:18:52.039    14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:18:52.039   14:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13
00:18:52.297  /dev/nbd13
00:18:52.298    14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:18:52.298  1+0 records in
00:18:52.298  1+0 records out
00:18:52.298  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867249 s, 4.7 MB/s
00:18:52.298    14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:18:52.298   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:18:52.298    14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:52.298    14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:52.298     14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:52.864    14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:18:52.864    {
00:18:52.864      "nbd_device": "/dev/nbd0",
00:18:52.864      "bdev_name": "nvme0n1"
00:18:52.864    },
00:18:52.864    {
00:18:52.864      "nbd_device": "/dev/nbd1",
00:18:52.864      "bdev_name": "nvme0n2"
00:18:52.864    },
00:18:52.864    {
00:18:52.864      "nbd_device": "/dev/nbd10",
00:18:52.864      "bdev_name": "nvme0n3"
00:18:52.864    },
00:18:52.864    {
00:18:52.864      "nbd_device": "/dev/nbd11",
00:18:52.864      "bdev_name": "nvme1n1"
00:18:52.864    },
00:18:52.864    {
00:18:52.864      "nbd_device": "/dev/nbd12",
00:18:52.864      "bdev_name": "nvme2n1"
00:18:52.864    },
00:18:52.864    {
00:18:52.865      "nbd_device": "/dev/nbd13",
00:18:52.865      "bdev_name": "nvme3n1"
00:18:52.865    }
00:18:52.865  ]'
00:18:52.865     14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:18:52.865    {
00:18:52.865      "nbd_device": "/dev/nbd0",
00:18:52.865      "bdev_name": "nvme0n1"
00:18:52.865    },
00:18:52.865    {
00:18:52.865      "nbd_device": "/dev/nbd1",
00:18:52.865      "bdev_name": "nvme0n2"
00:18:52.865    },
00:18:52.865    {
00:18:52.865      "nbd_device": "/dev/nbd10",
00:18:52.865      "bdev_name": "nvme0n3"
00:18:52.865    },
00:18:52.865    {
00:18:52.865      "nbd_device": "/dev/nbd11",
00:18:52.865      "bdev_name": "nvme1n1"
00:18:52.865    },
00:18:52.865    {
00:18:52.865      "nbd_device": "/dev/nbd12",
00:18:52.865      "bdev_name": "nvme2n1"
00:18:52.865    },
00:18:52.865    {
00:18:52.865      "nbd_device": "/dev/nbd13",
00:18:52.865      "bdev_name": "nvme3n1"
00:18:52.865    }
00:18:52.865  ]'
00:18:52.865     14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:52.865    14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:18:52.865  /dev/nbd1
00:18:52.865  /dev/nbd10
00:18:52.865  /dev/nbd11
00:18:52.865  /dev/nbd12
00:18:52.865  /dev/nbd13'
00:18:52.865     14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:18:52.865  /dev/nbd1
00:18:52.865  /dev/nbd10
00:18:52.865  /dev/nbd11
00:18:52.865  /dev/nbd12
00:18:52.865  /dev/nbd13'
00:18:52.865     14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:52.865    14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:18:52.865    14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:18:52.865  256+0 records in
00:18:52.865  256+0 records out
00:18:52.865  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00719657 s, 146 MB/s
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:18:52.865  256+0 records in
00:18:52.865  256+0 records out
00:18:52.865  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120536 s, 8.7 MB/s
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:18:52.865   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:18:53.123  256+0 records in
00:18:53.123  256+0 records out
00:18:53.123  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123271 s, 8.5 MB/s
00:18:53.123   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:18:53.123   14:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:18:53.123  256+0 records in
00:18:53.123  256+0 records out
00:18:53.123  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128375 s, 8.2 MB/s
00:18:53.123   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:18:53.123   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:18:53.381  256+0 records in
00:18:53.381  256+0 records out
00:18:53.381  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123327 s, 8.5 MB/s
00:18:53.381   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:18:53.381   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:18:53.381  256+0 records in
00:18:53.381  256+0 records out
00:18:53.381  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118228 s, 8.9 MB/s
00:18:53.381   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:18:53.381   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:18:53.639  256+0 records in
00:18:53.640  256+0 records out
00:18:53.640  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140817 s, 7.4 MB/s
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
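The data pass traced above boils down to one urandom pattern fanned out and compared back; a condensed sketch (temp-file path shortened):

    tmp_file=nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB test pattern
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct  # write pass
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M "$tmp_file" "$nbd"                             # byte-for-byte verify
    done
    rm "$tmp_file"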
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:53.640   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:53.898    14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:53.898   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:53.898   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:53.898   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:53.898   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:53.898   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:53.898   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:53.898   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:53.898   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:53.898   14:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:18:54.466    14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:18:54.466   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:18:54.466   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:18:54.466   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:54.466   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:54.466   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:18:54.466   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:54.466   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:54.466   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:54.466   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:18:54.724    14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:18:54.724   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:18:54.724   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:18:54.724   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:54.724   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:54.724   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:18:54.724   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:54.724   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:54.724   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:54.724   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:18:54.982    14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:18:54.982   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:18:54.982   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:18:54.982   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:54.982   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:54.982   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:18:54.982   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:54.982   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:54.982   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:54.982   14:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:18:55.241    14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:18:55.241   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:18:55.241   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:18:55.241   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:55.241   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:55.241   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:18:55.241   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:55.241   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:55.241   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:55.241   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:18:55.500    14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:18:55.500   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:18:55.500   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:18:55.500   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:55.500   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:55.500   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:18:55.500   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:55.500   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:55.758    14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:18:55.758    14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:55.758     14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:18:56.017    14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:18:56.017     14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:18:56.017     14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:18:56.017    14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:18:56.017     14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:18:56.017     14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:18:56.017     14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:18:56.017    14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:18:56.017    14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:18:56.017   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:18:56.017   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:18:56.017   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:18:56.017   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:56.017   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:56.017   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:18:56.017   14:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:18:56.275  malloc_lvol_verify
00:18:56.275   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:18:56.533  5ddfc2d8-31b3-404f-ba67-bc66e2ae3500
00:18:56.533   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:18:56.792  ddce3148-8cad-40f7-9840-8db889303f37
00:18:56.792   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:18:57.050  /dev/nbd0
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:18:57.050  mke2fs 1.47.0 (5-Feb-2023)
00:18:57.050  Discarding device blocks: done
00:18:57.050  Creating filesystem with 4096 1k blocks and 1024 inodes
00:18:57.050  
00:18:57.050  Allocating group tables: done
00:18:57.050  Writing inode tables: done
00:18:57.050  Creating journal (1024 blocks): done
00:18:57.050  Writing superblocks and filesystem accounting information: done
00:18:57.050  
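The lvol round trip just traced, condensed into a sketch (rpc.py path shortened; the capacity poll is an assumption based on the /sys/block check above):

    sock=/var/tmp/spdk-nbd.sock
    rpc.py -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512 B blocks
    rpc.py -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc.py -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume
    rpc.py -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    while (( $(cat /sys/block/nbd0/size) == 0 )); do sleep 0.1; done    # assumed poll loop
    mkfs.ext4 /dev/nbd0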
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:18:57.050   14:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:18:57.308    14:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:18:57.308   14:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:18:57.308   14:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:18:57.308   14:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:18:57.308   14:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:18:57.308   14:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74346
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74346 ']'
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74346
00:18:57.567    14:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:57.567    14:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74346
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:57.567  killing process with pid 74346
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74346'
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74346
00:18:57.567   14:29:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74346
00:18:58.503   14:29:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:18:58.503  
00:18:58.503  real	0m13.708s
00:18:58.503  user	0m19.943s
00:18:58.503  sys	0m4.371s
00:18:58.503   14:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:58.503   14:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:18:58.503  ************************************
00:18:58.503  END TEST bdev_nbd
00:18:58.503  ************************************
00:18:58.503   14:29:37 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:18:58.503   14:29:37 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']'
00:18:58.503   14:29:37 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']'
00:18:58.503   14:29:37 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite ''
00:18:58.503   14:29:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:58.503   14:29:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:58.503   14:29:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:18:58.503  ************************************
00:18:58.503  START TEST bdev_fio
00:18:58.503  ************************************
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:18:58.503  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:18:58.503    14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:18:58.503    14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:18:58.503    14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1
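Each echo pair above presumably appends one job section to bdev.fio; the generating loop, sketched with the redirection assumed (xtrace does not show it):

    for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        echo "[job_${b}]"    >> bdev.fio   # one fio job per bdev
        echo "filename=${b}" >> bdev.fio
    done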
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:58.503   14:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:18:58.762  ************************************
00:18:58.762  START TEST bdev_fio_rw_verify
00:18:58.762  ************************************
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:18:58.762    14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:18:58.762    14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:18:58.762    14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:18:58.762   14:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:18:58.762  job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:58.762  job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:58.762  job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:58.762  job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:58.762  job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:58.762  job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:18:58.762  fio-3.35
00:18:58.762  Starting 6 threads
00:19:10.984  
00:19:10.984  job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74782: Wed Nov 20 14:29:48 2024
00:19:10.984    read: IOPS=27.4k, BW=107MiB/s (112MB/s)(1072MiB/10001msec)
00:19:10.984      slat (usec): min=3, max=763, avg= 7.65, stdev= 4.79
00:19:10.984      clat (usec): min=131, max=576868, avg=683.60, stdev=3122.08
00:19:10.984       lat (usec): min=136, max=576875, avg=691.25, stdev=3122.13
00:19:10.984      clat percentiles (usec):
00:19:10.984       | 50.000th=[   676], 99.000th=[  1270], 99.900th=[  3163],
00:19:10.984       | 99.990th=[  5866], 99.999th=[574620]
00:19:10.984    write: IOPS=27.8k, BW=109MiB/s (114MB/s)(1087MiB/10001msec); 0 zone resets
00:19:10.985      slat (usec): min=14, max=3066, avg=29.15, stdev=28.76
00:19:10.985      clat (usec): min=99, max=6384, avg=740.70, stdev=272.97
00:19:10.985       lat (usec): min=119, max=6681, avg=769.85, stdev=275.66
00:19:10.985      clat percentiles (usec):
00:19:10.985       | 50.000th=[  742], 99.000th=[ 1385], 99.900th=[ 3195], 99.990th=[ 5604],
00:19:10.985       | 99.999th=[ 6325]
00:19:10.985     bw (  KiB/s): min=96343, max=137672, per=100.00%, avg=112769.71, stdev=2270.77, samples=113
00:19:10.985     iops        : min=24085, max=34418, avg=28192.12, stdev=567.68, samples=113
00:19:10.985    lat (usec)   : 100=0.01%, 250=2.36%, 500=19.03%, 750=35.67%, 1000=34.09%
00:19:10.985    lat (msec)   : 2=8.63%, 4=0.17%, 10=0.06%, 750=0.01%
00:19:10.985    cpu          : usr=61.08%, sys=25.78%, ctx=7162, majf=0, minf=23659
00:19:10.985    IO depths    : 1=12.3%, 2=24.9%, 4=50.1%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:10.985       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:10.985       complete  : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:10.985       issued rwts: total=274451,278316,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:10.985       latency   : target=0, window=0, percentile=100.00%, depth=8
00:19:10.985  
00:19:10.985  Run status group 0 (all jobs):
00:19:10.985     READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=1072MiB (1124MB), run=10001-10001msec
00:19:10.985    WRITE: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=1087MiB (1140MB), run=10001-10001msec
00:19:10.985  -----------------------------------------------------
00:19:10.985  Suppressions used:
00:19:10.985    count      bytes template
00:19:10.985        6         48 /usr/src/fio/parse.c
00:19:10.985     3681     353376 /usr/src/fio/iolog.c
00:19:10.985        1          8 libtcmalloc_minimal.so
00:19:10.985        1        904 libcrypto.so
00:19:10.985  -----------------------------------------------------
00:19:10.985  
00:19:10.985  
00:19:10.985  real	0m12.443s
00:19:10.985  user	0m38.615s
00:19:10.985  sys	0m15.812s
00:19:10.985   14:29:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:10.985  ************************************
00:19:10.985  END TEST bdev_fio_rw_verify
00:19:10.985  ************************************
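Note on the run above: before launching fio, the harness checks whether the SPDK fio plugin was built with a sanitizer and, if so, preloads the ASan runtime ahead of the plugin so the sanitizer is loaded first in the process. A minimal sketch of that step, using the paths visible in this log (anything else is illustrative):

    # Locate an ASan runtime linked into the fio plugin, if any.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload the sanitizer before the plugin, then hand both to a stock
    # fio binary as an external ioengine.
    [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $plugin" \
        /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio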
00:19:10.985   14:29:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:19:10.985   14:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']'
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']'
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']'
00:19:11.244   14:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite
00:19:11.244    14:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:19:11.244    14:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' '  "name": "nvme0n1",' '  "aliases": [' '    "bd92fc02-9e70-440a-9302-8a103a3b5c98"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "bd92fc02-9e70-440a-9302-8a103a3b5c98",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n2",' '  "aliases": [' '    "14f3e285-ef1d-45e4-8bc4-a4ed4a1b418b"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "14f3e285-ef1d-45e4-8bc4-a4ed4a1b418b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme0n3",' '  "aliases": [' '    "130ceb0e-eaaf-4ae5-9e15-d9f69680817c"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1048576,' '  "uuid": "130ceb0e-eaaf-4ae5-9e15-d9f69680817c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme1n1",' '  "aliases": [' '    "7e8bb7bf-0a9b-46df-80a3-4766fc5ca9e4"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 262144,' '  "uuid": "7e8bb7bf-0a9b-46df-80a3-4766fc5ca9e4",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme2n1",' '  "aliases": [' '    "d339e848-8dc4-4380-8f57-ec46fffb97e6"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "d339e848-8dc4-4380-8f57-ec46fffb97e6",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}' '{' '  "name": "nvme3n1",' '  "aliases": [' '    "9e7b0877-2bb7-45ed-8076-5f29d505e1f7"' '  ],' '  "product_name": "xNVMe bdev",' '  "block_size": 4096,' '  "num_blocks": 1548666,' '  "uuid": "9e7b0877-2bb7-45ed-8076-5f29d505e1f7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": false,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {}' '}'
00:19:11.244   14:29:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]]
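The jq filter at @354 above is what decides whether a trim workload runs at all: it keeps only the names of bdevs whose supported_io_types report unmap support. Every xNVMe bdev in the dump reports "unmap": false, so the filter yields an empty string and the [[ -n '' ]] test skips the trim fio job. A hedged sketch of the same selection, assuming the bdev JSON shown above is on stdin:

    # Emit only bdevs able to service unmap (trim) commands; empty output
    # here means the trim workload is skipped.
    jq -r 'select(.supported_io_types.unmap == true) | .name'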
00:19:11.244   14:29:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:19:11.244  /home/vagrant/spdk_repo/spdk
00:19:11.244   14:29:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd
00:19:11.244   14:29:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT
00:19:11.244   14:29:50 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0
00:19:11.244  
00:19:11.244  real	0m12.618s
00:19:11.244  user	0m38.720s
00:19:11.244  sys	0m15.885s
00:19:11.244   14:29:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:11.244   14:29:50 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:19:11.244  ************************************
00:19:11.244  END TEST bdev_fio
00:19:11.244  ************************************
00:19:11.244   14:29:50 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:19:11.244   14:29:50 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:19:11.244   14:29:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:19:11.244   14:29:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:11.244   14:29:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:11.244  ************************************
00:19:11.244  START TEST bdev_verify
00:19:11.244  ************************************
00:19:11.245   14:29:50 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:19:11.245  [2024-11-20 14:29:50.165450] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:19:11.245  [2024-11-20 14:29:50.166185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74952 ]
00:19:11.503  [2024-11-20 14:29:50.366928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:11.761  [2024-11-20 14:29:50.502820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:11.761  [2024-11-20 14:29:50.502831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:12.048  Running I/O for 5 seconds...
00:19:14.357      21632.00 IOPS,    84.50 MiB/s
[2024-11-20T14:29:54.273Z]     21840.00 IOPS,    85.31 MiB/s
[2024-11-20T14:29:55.207Z]     22410.67 IOPS,    87.54 MiB/s
[2024-11-20T14:29:56.142Z]     21872.00 IOPS,    85.44 MiB/s
[2024-11-20T14:29:56.142Z]     21760.00 IOPS,    85.00 MiB/s
00:19:17.160                                                                                                  Latency(us)
00:19:17.160  
[2024-11-20T14:29:56.142Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:17.160  Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0x0 length 0x80000
00:19:17.160  	 nvme0n1             :       5.05    1646.77       6.43       0.00     0.00   77591.58    7864.32   80073.08
00:19:17.160  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0x80000 length 0x80000
00:19:17.160  	 nvme0n1             :       5.06    1594.97       6.23       0.00     0.00   80095.86    6762.12  125829.12
00:19:17.160  Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0x0 length 0x80000
00:19:17.160  	 nvme0n2             :       5.05    1646.16       6.43       0.00     0.00   77482.55   10783.65   81502.95
00:19:17.160  Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0x80000 length 0x80000
00:19:17.160  	 nvme0n2             :       5.06    1568.86       6.13       0.00     0.00   81283.54   12630.57   96278.34
00:19:17.160  Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0x0 length 0x80000
00:19:17.160  	 nvme0n3             :       5.07    1642.08       6.41       0.00     0.00   77548.29   17158.52   80549.70
00:19:17.160  Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0x80000 length 0x80000
00:19:17.160  	 nvme0n3             :       5.06    1568.28       6.13       0.00     0.00   81136.86   15490.33  105334.23
00:19:17.160  Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0x0 length 0x20000
00:19:17.160  	 nvme1n1             :       5.07    1641.47       6.41       0.00     0.00   77455.64   11856.06   81502.95
00:19:17.160  Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0x20000 length 0x20000
00:19:17.160  	 nvme1n1             :       5.08    1586.45       6.20       0.00     0.00   80057.50    9532.51  129642.12
00:19:17.160  Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0x0 length 0xa0000
00:19:17.160  	 nvme2n1             :       5.06    1645.37       6.43       0.00     0.00   77127.82    8936.73   74353.57
00:19:17.160  Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0xa0000 length 0xa0000
00:19:17.160  	 nvme2n1             :       5.08    1585.88       6.19       0.00     0.00   79935.06    8102.63  112483.61
00:19:17.160  Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0x0 length 0xbd0bd
00:19:17.160  	 nvme3n1             :       5.08    2704.95      10.57       0.00     0.00   46800.31    3202.33   70063.94
00:19:17.160  Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:17.160  	 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:19:17.160  	 nvme3n1             :       5.08    2690.58      10.51       0.00     0.00   46976.94    3678.95   96278.34
00:19:17.160  
[2024-11-20T14:29:56.142Z]  ===================================================================================================================
00:19:17.160  
[2024-11-20T14:29:56.142Z]  Total                       :              21521.82      84.07       0.00     0.00   70889.83    3202.33  129642.12
00:19:18.554  
00:19:18.554  real	0m7.123s
00:19:18.554  user	0m11.367s
00:19:18.554  sys	0m1.708s
00:19:18.554   14:29:57 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:18.554   14:29:57 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:19:18.554  ************************************
00:19:18.554  END TEST bdev_verify
00:19:18.554  ************************************
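bdev_verify above drives the same xNVMe bdevs through the bdevperf example app instead of fio. A sketch of the invocation with the options as this log exercises them; judging from the per-core Job lines, -C lets both reactors in the 0x3 core mask submit I/O to every bdev:

    # -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: write then
    # read-back verification; -t 5: seconds; -C: every core, every bdev.
    build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3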
00:19:18.554   14:29:57 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:18.554   14:29:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:19:18.554   14:29:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:18.554   14:29:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:18.554  ************************************
00:19:18.554  START TEST bdev_verify_big_io
00:19:18.554  ************************************
00:19:18.554   14:29:57 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:18.554  [2024-11-20 14:29:57.326408] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:19:18.554  [2024-11-20 14:29:57.326550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75051 ]
00:19:18.554  [2024-11-20 14:29:57.502705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:18.812  [2024-11-20 14:29:57.635282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:18.812  [2024-11-20 14:29:57.635294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:19.377  Running I/O for 5 seconds...
00:19:25.482       1960.00 IOPS,   122.50 MiB/s
[2024-11-20T14:30:04.464Z]      3316.00 IOPS,   207.25 MiB/s
00:19:25.482                                                                                                  Latency(us)
00:19:25.482  
[2024-11-20T14:30:04.464Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:25.482  Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0x0 length 0x8000
00:19:25.482  	 nvme0n1             :       6.06     124.14       7.76       0.00     0.00 1010353.03   18469.24 1220161.16
00:19:25.482  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0x8000 length 0x8000
00:19:25.482  	 nvme0n1             :       6.02     114.34       7.15       0.00     0.00 1058519.84   35031.97 1273543.21
00:19:25.482  Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0x0 length 0x8000
00:19:25.482  	 nvme0n2             :       6.04     103.28       6.46       0.00     0.00 1182872.10   26095.24 1929379.84
00:19:25.482  Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0x8000 length 0x8000
00:19:25.482  	 nvme0n2             :       5.99      88.13       5.51       0.00     0.00 1269249.07  181117.67  941811.90
00:19:25.482  Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0x0 length 0x8000
00:19:25.482  	 nvme0n3             :       6.05     156.16       9.76       0.00     0.00  759046.36   24784.52  808356.77
00:19:25.482  Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0x8000 length 0x8000
00:19:25.482  	 nvme0n3             :       6.02     142.20       8.89       0.00     0.00  812585.15   26571.87  846486.81
00:19:25.482  Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0x0 length 0x2000
00:19:25.482  	 nvme1n1             :       6.05     148.14       9.26       0.00     0.00  769061.24   24069.59  743535.71
00:19:25.482  Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0x2000 length 0x2000
00:19:25.482  	 nvme1n1             :       6.01     130.39       8.15       0.00     0.00  885586.36   12392.26  873177.83
00:19:25.482  Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0x0 length 0xa000
00:19:25.482  	 nvme2n1             :       6.05      95.18       5.95       0.00     0.00 1158137.17   22163.08 2531834.41
00:19:25.482  Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0xa000 length 0xa000
00:19:25.482  	 nvme2n1             :       6.02     146.12       9.13       0.00     0.00  765931.69    7626.01 1746355.67
00:19:25.482  Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0x0 length 0xbd0b
00:19:25.482  	 nvme3n1             :       6.06      79.26       4.95       0.00     0.00 1345953.17    7626.01 3263931.11
00:19:25.482  Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:25.482  	 Verification LBA range: start 0xbd0b length 0xbd0b
00:19:25.482  	 nvme3n1             :       6.02     127.49       7.97       0.00     0.00  846744.59    9889.98  991380.95
00:19:25.482  
[2024-11-20T14:30:04.464Z]  ===================================================================================================================
00:19:25.482  
[2024-11-20T14:30:04.464Z]  Total                       :               1454.82      90.93       0.00     0.00  949097.35    7626.01 3263931.11
00:19:26.859  
00:19:26.859  real	0m8.263s
00:19:26.859  user	0m15.124s
00:19:26.859  sys	0m0.458s
00:19:26.859   14:30:05 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:26.859   14:30:05 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:19:26.859  ************************************
00:19:26.859  END TEST bdev_verify_big_io
00:19:26.859  ************************************
00:19:26.859   14:30:05 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:26.859   14:30:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:26.859   14:30:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:26.859   14:30:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:26.859  ************************************
00:19:26.859  START TEST bdev_write_zeroes
00:19:26.859  ************************************
00:19:26.859   14:30:05 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:26.859  [2024-11-20 14:30:05.640452] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:19:26.859  [2024-11-20 14:30:05.640614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75161 ]
00:19:26.859  [2024-11-20 14:30:05.819867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:27.117  [2024-11-20 14:30:05.955980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:27.685  Running I/O for 1 seconds...
00:19:28.621      66176.00 IOPS,   258.50 MiB/s
00:19:28.621                                                                                                  Latency(us)
00:19:28.621  
[2024-11-20T14:30:07.603Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:28.621  Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:28.621  	 nvme0n1             :       1.02   10068.44      39.33       0.00     0.00   12699.37    7745.16   22401.40
00:19:28.621  Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:28.621  	 nvme0n2             :       1.02   10052.83      39.27       0.00     0.00   12709.25    7745.16   22878.02
00:19:28.621  Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:28.621  	 nvme0n3             :       1.02   10040.33      39.22       0.00     0.00   12714.72    7745.16   22997.18
00:19:28.621  Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:28.621  	 nvme1n1             :       1.02   10027.56      39.17       0.00     0.00   12720.80    7745.16   23235.49
00:19:28.621  Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:28.621  	 nvme2n1             :       1.02   10015.10      39.12       0.00     0.00   12725.61    7745.16   23354.65
00:19:28.621  Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:28.621  	 nvme3n1             :       1.03   15390.68      60.12       0.00     0.00    8259.13    3023.59   23235.49
00:19:28.621  
[2024-11-20T14:30:07.603Z]  ===================================================================================================================
00:19:28.621  
[2024-11-20T14:30:07.603Z]  Total                       :              65594.94     256.23       0.00     0.00   11663.00    3023.59   23354.65
00:19:29.556  
00:19:29.556  real	0m2.911s
00:19:29.556  user	0m2.151s
00:19:29.556  sys	0m0.583s
00:19:29.556   14:30:08 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:29.556   14:30:08 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:29.556  ************************************
00:19:29.556  END TEST bdev_write_zeroes
00:19:29.556  ************************************
00:19:29.556   14:30:08 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:29.556   14:30:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:29.556   14:30:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:29.556   14:30:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:29.556  ************************************
00:19:29.556  START TEST bdev_json_nonenclosed
00:19:29.556  ************************************
00:19:29.556   14:30:08 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:29.814  [2024-11-20 14:30:08.592750] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:19:29.814  [2024-11-20 14:30:08.592897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75209 ]
00:19:29.814  [2024-11-20 14:30:08.770223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:30.073  [2024-11-20 14:30:08.894355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:30.073  [2024-11-20 14:30:08.894496] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:19:30.073  [2024-11-20 14:30:08.894545] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:19:30.073  [2024-11-20 14:30:08.894600] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:30.331  
00:19:30.331  real	0m0.669s
00:19:30.331  user	0m0.455s
00:19:30.331  sys	0m0.107s
00:19:30.331   14:30:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:30.331   14:30:09 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:19:30.331  ************************************
00:19:30.331  END TEST bdev_json_nonenclosed
00:19:30.331  ************************************
00:19:30.331   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:30.331   14:30:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:30.331   14:30:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:30.331   14:30:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:30.331  ************************************
00:19:30.331  START TEST bdev_json_nonarray
00:19:30.331  ************************************
00:19:30.331   14:30:09 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:30.589  [2024-11-20 14:30:09.314487] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:19:30.589  [2024-11-20 14:30:09.314694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75240 ]
00:19:30.589  [2024-11-20 14:30:09.495500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:30.847  [2024-11-20 14:30:09.616279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:30.847  [2024-11-20 14:30:09.616392] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:19:30.847  [2024-11-20 14:30:09.616420] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:19:30.847  [2024-11-20 14:30:09.616434] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:31.105  
00:19:31.105  real	0m0.658s
00:19:31.105  user	0m0.438s
00:19:31.105  sys	0m0.115s
00:19:31.105   14:30:09 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:31.105  ************************************
00:19:31.105   14:30:09 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:31.105  END TEST bdev_json_nonarray
00:19:31.105  ************************************
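bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each feeds bdevperf a deliberately malformed configuration (one not enclosed in {}, one whose 'subsystems' is not an array) and passes only if the app rejects it cleanly, as the *ERROR* and spdk_app_stop lines above show. A hedged sketch of an equivalent check, with file names taken from this log:

    for cfg in nonenclosed.json nonarray.json; do
        # A zero exit status here would itself be a test failure.
        build/examples/bdevperf --json "test/bdev/$cfg" \
            -q 128 -o 4096 -w write_zeroes -t 1 && exit 1
    done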
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]]
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]]
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]]
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]]
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]]
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]]
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]]
00:19:31.105   14:30:09 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:19:31.671  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:33.575  0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:19:33.575  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:19:33.575  0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:19:33.575  0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:19:33.575  
00:19:33.575  real	0m58.315s
00:19:33.575  user	1m42.719s
00:19:33.575  sys	0m31.134s
00:19:33.575   14:30:12 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:33.575   14:30:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:33.575  ************************************
00:19:33.575  END TEST blockdev_xnvme
00:19:33.575  ************************************
00:19:33.839   14:30:12  -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:19:33.839   14:30:12  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:33.839   14:30:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:33.839   14:30:12  -- common/autotest_common.sh@10 -- # set +x
00:19:33.839  ************************************
00:19:33.839  START TEST ublk
00:19:33.839  ************************************
00:19:33.839   14:30:12 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:19:33.839  * Looking for test storage...
00:19:33.839  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:19:33.839    14:30:12 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:19:33.839     14:30:12 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:19:33.839     14:30:12 ublk -- common/autotest_common.sh@1693 -- # lcov --version
00:19:33.839    14:30:12 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:19:33.839    14:30:12 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:33.839    14:30:12 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:33.839    14:30:12 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:33.839    14:30:12 ublk -- scripts/common.sh@336 -- # IFS=.-:
00:19:33.839    14:30:12 ublk -- scripts/common.sh@336 -- # read -ra ver1
00:19:33.839    14:30:12 ublk -- scripts/common.sh@337 -- # IFS=.-:
00:19:33.839    14:30:12 ublk -- scripts/common.sh@337 -- # read -ra ver2
00:19:33.839    14:30:12 ublk -- scripts/common.sh@338 -- # local 'op=<'
00:19:33.839    14:30:12 ublk -- scripts/common.sh@340 -- # ver1_l=2
00:19:33.839    14:30:12 ublk -- scripts/common.sh@341 -- # ver2_l=1
00:19:33.839    14:30:12 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:33.839    14:30:12 ublk -- scripts/common.sh@344 -- # case "$op" in
00:19:33.839    14:30:12 ublk -- scripts/common.sh@345 -- # : 1
00:19:33.839    14:30:12 ublk -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:33.839    14:30:12 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:33.839     14:30:12 ublk -- scripts/common.sh@365 -- # decimal 1
00:19:33.839     14:30:12 ublk -- scripts/common.sh@353 -- # local d=1
00:19:33.839     14:30:12 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:33.839     14:30:12 ublk -- scripts/common.sh@355 -- # echo 1
00:19:33.839    14:30:12 ublk -- scripts/common.sh@365 -- # ver1[v]=1
00:19:33.839     14:30:12 ublk -- scripts/common.sh@366 -- # decimal 2
00:19:33.839     14:30:12 ublk -- scripts/common.sh@353 -- # local d=2
00:19:33.839     14:30:12 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:33.839     14:30:12 ublk -- scripts/common.sh@355 -- # echo 2
00:19:33.839    14:30:12 ublk -- scripts/common.sh@366 -- # ver2[v]=2
00:19:33.839    14:30:12 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:33.839    14:30:12 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:33.839    14:30:12 ublk -- scripts/common.sh@368 -- # return 0
00:19:33.839    14:30:12 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:33.839    14:30:12 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:19:33.839  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:33.839  		--rc genhtml_branch_coverage=1
00:19:33.839  		--rc genhtml_function_coverage=1
00:19:33.839  		--rc genhtml_legend=1
00:19:33.839  		--rc geninfo_all_blocks=1
00:19:33.839  		--rc geninfo_unexecuted_blocks=1
00:19:33.839  		
00:19:33.839  		'
00:19:33.839    14:30:12 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:19:33.839  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:33.839  		--rc genhtml_branch_coverage=1
00:19:33.839  		--rc genhtml_function_coverage=1
00:19:33.839  		--rc genhtml_legend=1
00:19:33.839  		--rc geninfo_all_blocks=1
00:19:33.839  		--rc geninfo_unexecuted_blocks=1
00:19:33.839  		
00:19:33.839  		'
00:19:33.839    14:30:12 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:19:33.839  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:33.839  		--rc genhtml_branch_coverage=1
00:19:33.839  		--rc genhtml_function_coverage=1
00:19:33.839  		--rc genhtml_legend=1
00:19:33.839  		--rc geninfo_all_blocks=1
00:19:33.839  		--rc geninfo_unexecuted_blocks=1
00:19:33.839  		
00:19:33.839  		'
00:19:33.839    14:30:12 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:19:33.839  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:33.839  		--rc genhtml_branch_coverage=1
00:19:33.839  		--rc genhtml_function_coverage=1
00:19:33.840  		--rc genhtml_legend=1
00:19:33.840  		--rc geninfo_all_blocks=1
00:19:33.840  		--rc geninfo_unexecuted_blocks=1
00:19:33.840  		
00:19:33.840  		'
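The block above is common.sh probing the installed lcov version and comparing it against 1.15 before exporting the branch/function coverage flags. A minimal sketch of that comparison, reconstructed from the traced scripts/common.sh lines (variable names mirror the trace; missing fields compare as 0):

    # Split versions on '.', '-' and ':' and compare field by field.
    IFS='.-:' read -ra ver1 <<< "1.15"
    IFS='.-:' read -ra ver2 <<< "2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { echo '>'; break; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { echo '<'; break; }
    done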
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:19:33.840    14:30:12 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:19:33.840    14:30:12 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512
00:19:33.840    14:30:12 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:19:33.840    14:30:12 ublk -- lvol/common.sh@9 -- # AIO_BS=4096
00:19:33.840    14:30:12 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:19:33.840    14:30:12 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:19:33.840    14:30:12 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:19:33.840    14:30:12 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]]
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv
00:19:33.840   14:30:12 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config
00:19:33.840   14:30:12 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:33.840   14:30:12 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:33.840   14:30:12 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:33.840  ************************************
00:19:33.840  START TEST test_save_ublk_config
00:19:33.840  ************************************
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75531
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75531
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75531 ']'
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:33.840  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:33.840   14:30:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:34.101  [2024-11-20 14:30:12.927654] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:19:34.101  [2024-11-20 14:30:12.927830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75531 ]
00:19:34.358  [2024-11-20 14:30:13.108663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:34.358  [2024-11-20 14:30:13.211781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:35.345   14:30:13 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:35.346   14:30:13 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:19:35.346   14:30:13 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0
00:19:35.346   14:30:13 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd
00:19:35.346   14:30:13 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:35.346   14:30:13 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:35.346  [2024-11-20 14:30:14.006598] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:35.346  [2024-11-20 14:30:14.007689] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:35.346  malloc0
00:19:35.346  [2024-11-20 14:30:14.086781] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:19:35.346  [2024-11-20 14:30:14.086908] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:19:35.346  [2024-11-20 14:30:14.086928] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:35.346  [2024-11-20 14:30:14.086938] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:35.346  [2024-11-20 14:30:14.094813] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:35.346  [2024-11-20 14:30:14.094852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:35.346  [2024-11-20 14:30:14.102639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:35.346  [2024-11-20 14:30:14.102822] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:35.346  [2024-11-20 14:30:14.119613] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:35.346  0
00:19:35.346   14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:35.346    14:30:14 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config
00:19:35.346    14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:35.346    14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:35.605    14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:35.605   14:30:14 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{
00:19:35.605  "subsystems": [
00:19:35.605  {
00:19:35.605  "subsystem": "fsdev",
00:19:35.605  "config": [
00:19:35.605  {
00:19:35.605  "method": "fsdev_set_opts",
00:19:35.605  "params": {
00:19:35.605  "fsdev_io_pool_size": 65535,
00:19:35.605  "fsdev_io_cache_size": 256
00:19:35.605  }
00:19:35.605  }
00:19:35.605  ]
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "keyring",
00:19:35.605  "config": []
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "iobuf",
00:19:35.605  "config": [
00:19:35.605  {
00:19:35.605  "method": "iobuf_set_options",
00:19:35.605  "params": {
00:19:35.605  "small_pool_count": 8192,
00:19:35.605  "large_pool_count": 1024,
00:19:35.605  "small_bufsize": 8192,
00:19:35.605  "large_bufsize": 135168,
00:19:35.605  "enable_numa": false
00:19:35.605  }
00:19:35.605  }
00:19:35.605  ]
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "sock",
00:19:35.605  "config": [
00:19:35.605  {
00:19:35.605  "method": "sock_set_default_impl",
00:19:35.605  "params": {
00:19:35.605  "impl_name": "posix"
00:19:35.605  }
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "method": "sock_impl_set_options",
00:19:35.605  "params": {
00:19:35.605  "impl_name": "ssl",
00:19:35.605  "recv_buf_size": 4096,
00:19:35.605  "send_buf_size": 4096,
00:19:35.605  "enable_recv_pipe": true,
00:19:35.605  "enable_quickack": false,
00:19:35.605  "enable_placement_id": 0,
00:19:35.605  "enable_zerocopy_send_server": true,
00:19:35.605  "enable_zerocopy_send_client": false,
00:19:35.605  "zerocopy_threshold": 0,
00:19:35.605  "tls_version": 0,
00:19:35.605  "enable_ktls": false
00:19:35.605  }
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "method": "sock_impl_set_options",
00:19:35.605  "params": {
00:19:35.605  "impl_name": "posix",
00:19:35.605  "recv_buf_size": 2097152,
00:19:35.605  "send_buf_size": 2097152,
00:19:35.605  "enable_recv_pipe": true,
00:19:35.605  "enable_quickack": false,
00:19:35.605  "enable_placement_id": 0,
00:19:35.605  "enable_zerocopy_send_server": true,
00:19:35.605  "enable_zerocopy_send_client": false,
00:19:35.605  "zerocopy_threshold": 0,
00:19:35.605  "tls_version": 0,
00:19:35.605  "enable_ktls": false
00:19:35.605  }
00:19:35.605  }
00:19:35.605  ]
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "vmd",
00:19:35.605  "config": []
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "accel",
00:19:35.605  "config": [
00:19:35.605  {
00:19:35.605  "method": "accel_set_options",
00:19:35.605  "params": {
00:19:35.605  "small_cache_size": 128,
00:19:35.605  "large_cache_size": 16,
00:19:35.605  "task_count": 2048,
00:19:35.605  "sequence_count": 2048,
00:19:35.605  "buf_count": 2048
00:19:35.605  }
00:19:35.605  }
00:19:35.605  ]
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "bdev",
00:19:35.605  "config": [
00:19:35.605  {
00:19:35.605  "method": "bdev_set_options",
00:19:35.605  "params": {
00:19:35.605  "bdev_io_pool_size": 65535,
00:19:35.605  "bdev_io_cache_size": 256,
00:19:35.605  "bdev_auto_examine": true,
00:19:35.605  "iobuf_small_cache_size": 128,
00:19:35.605  "iobuf_large_cache_size": 16
00:19:35.605  }
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "method": "bdev_raid_set_options",
00:19:35.605  "params": {
00:19:35.605  "process_window_size_kb": 1024,
00:19:35.605  "process_max_bandwidth_mb_sec": 0
00:19:35.605  }
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "method": "bdev_iscsi_set_options",
00:19:35.605  "params": {
00:19:35.605  "timeout_sec": 30
00:19:35.605  }
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "method": "bdev_nvme_set_options",
00:19:35.605  "params": {
00:19:35.605  "action_on_timeout": "none",
00:19:35.605  "timeout_us": 0,
00:19:35.605  "timeout_admin_us": 0,
00:19:35.605  "keep_alive_timeout_ms": 10000,
00:19:35.605  "arbitration_burst": 0,
00:19:35.605  "low_priority_weight": 0,
00:19:35.605  "medium_priority_weight": 0,
00:19:35.605  "high_priority_weight": 0,
00:19:35.605  "nvme_adminq_poll_period_us": 10000,
00:19:35.605  "nvme_ioq_poll_period_us": 0,
00:19:35.605  "io_queue_requests": 0,
00:19:35.605  "delay_cmd_submit": true,
00:19:35.605  "transport_retry_count": 4,
00:19:35.605  "bdev_retry_count": 3,
00:19:35.605  "transport_ack_timeout": 0,
00:19:35.605  "ctrlr_loss_timeout_sec": 0,
00:19:35.605  "reconnect_delay_sec": 0,
00:19:35.605  "fast_io_fail_timeout_sec": 0,
00:19:35.605  "disable_auto_failback": false,
00:19:35.605  "generate_uuids": false,
00:19:35.605  "transport_tos": 0,
00:19:35.605  "nvme_error_stat": false,
00:19:35.605  "rdma_srq_size": 0,
00:19:35.605  "io_path_stat": false,
00:19:35.605  "allow_accel_sequence": false,
00:19:35.605  "rdma_max_cq_size": 0,
00:19:35.605  "rdma_cm_event_timeout_ms": 0,
00:19:35.605  "dhchap_digests": [
00:19:35.605  "sha256",
00:19:35.605  "sha384",
00:19:35.605  "sha512"
00:19:35.605  ],
00:19:35.605  "dhchap_dhgroups": [
00:19:35.605  "null",
00:19:35.605  "ffdhe2048",
00:19:35.605  "ffdhe3072",
00:19:35.605  "ffdhe4096",
00:19:35.605  "ffdhe6144",
00:19:35.605  "ffdhe8192"
00:19:35.605  ]
00:19:35.605  }
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "method": "bdev_nvme_set_hotplug",
00:19:35.605  "params": {
00:19:35.605  "period_us": 100000,
00:19:35.605  "enable": false
00:19:35.605  }
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "method": "bdev_malloc_create",
00:19:35.605  "params": {
00:19:35.605  "name": "malloc0",
00:19:35.605  "num_blocks": 8192,
00:19:35.605  "block_size": 4096,
00:19:35.605  "physical_block_size": 4096,
00:19:35.605  "uuid": "d8ec78aa-7830-455c-b6d9-85cf82073c7e",
00:19:35.605  "optimal_io_boundary": 0,
00:19:35.605  "md_size": 0,
00:19:35.605  "dif_type": 0,
00:19:35.605  "dif_is_head_of_md": false,
00:19:35.605  "dif_pi_format": 0
00:19:35.605  }
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "method": "bdev_wait_for_examine"
00:19:35.605  }
00:19:35.605  ]
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "scsi",
00:19:35.605  "config": null
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "scheduler",
00:19:35.605  "config": [
00:19:35.605  {
00:19:35.605  "method": "framework_set_scheduler",
00:19:35.605  "params": {
00:19:35.605  "name": "static"
00:19:35.605  }
00:19:35.605  }
00:19:35.605  ]
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "vhost_scsi",
00:19:35.605  "config": []
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "vhost_blk",
00:19:35.605  "config": []
00:19:35.605  },
00:19:35.605  {
00:19:35.605  "subsystem": "ublk",
00:19:35.605  "config": [
00:19:35.605  {
00:19:35.606  "method": "ublk_create_target",
00:19:35.606  "params": {
00:19:35.606  "cpumask": "1"
00:19:35.606  }
00:19:35.606  },
00:19:35.606  {
00:19:35.606  "method": "ublk_start_disk",
00:19:35.606  "params": {
00:19:35.606  "bdev_name": "malloc0",
00:19:35.606  "ublk_id": 0,
00:19:35.606  "num_queues": 1,
00:19:35.606  "queue_depth": 128
00:19:35.606  }
00:19:35.606  }
00:19:35.606  ]
00:19:35.606  },
00:19:35.606  {
00:19:35.606  "subsystem": "nbd",
00:19:35.606  "config": []
00:19:35.606  },
00:19:35.606  {
00:19:35.606  "subsystem": "nvmf",
00:19:35.606  "config": [
00:19:35.606  {
00:19:35.606  "method": "nvmf_set_config",
00:19:35.606  "params": {
00:19:35.606  "discovery_filter": "match_any",
00:19:35.606  "admin_cmd_passthru": {
00:19:35.606  "identify_ctrlr": false
00:19:35.606  },
00:19:35.606  "dhchap_digests": [
00:19:35.606  "sha256",
00:19:35.606  "sha384",
00:19:35.606  "sha512"
00:19:35.606  ],
00:19:35.606  "dhchap_dhgroups": [
00:19:35.606  "null",
00:19:35.606  "ffdhe2048",
00:19:35.606  "ffdhe3072",
00:19:35.606  "ffdhe4096",
00:19:35.606  "ffdhe6144",
00:19:35.606  "ffdhe8192"
00:19:35.606  ]
00:19:35.606  }
00:19:35.606  },
00:19:35.606  {
00:19:35.606  "method": "nvmf_set_max_subsystems",
00:19:35.606  "params": {
00:19:35.606  "max_subsystems": 1024
00:19:35.606  }
00:19:35.606  },
00:19:35.606  {
00:19:35.606  "method": "nvmf_set_crdt",
00:19:35.606  "params": {
00:19:35.606  "crdt1": 0,
00:19:35.606  "crdt2": 0,
00:19:35.606  "crdt3": 0
00:19:35.606  }
00:19:35.606  }
00:19:35.606  ]
00:19:35.606  },
00:19:35.606  {
00:19:35.606  "subsystem": "iscsi",
00:19:35.606  "config": [
00:19:35.606  {
00:19:35.606  "method": "iscsi_set_options",
00:19:35.606  "params": {
00:19:35.606  "node_base": "iqn.2016-06.io.spdk",
00:19:35.606  "max_sessions": 128,
00:19:35.606  "max_connections_per_session": 2,
00:19:35.606  "max_queue_depth": 64,
00:19:35.606  "default_time2wait": 2,
00:19:35.606  "default_time2retain": 20,
00:19:35.606  "first_burst_length": 8192,
00:19:35.606  "immediate_data": true,
00:19:35.606  "allow_duplicated_isid": false,
00:19:35.606  "error_recovery_level": 0,
00:19:35.606  "nop_timeout": 60,
00:19:35.606  "nop_in_interval": 30,
00:19:35.606  "disable_chap": false,
00:19:35.606  "require_chap": false,
00:19:35.606  "mutual_chap": false,
00:19:35.606  "chap_group": 0,
00:19:35.606  "max_large_datain_per_connection": 64,
00:19:35.606  "max_r2t_per_connection": 4,
00:19:35.606  "pdu_pool_size": 36864,
00:19:35.606  "immediate_data_pool_size": 16384,
00:19:35.606  "data_out_pool_size": 2048
00:19:35.606  }
00:19:35.606  }
00:19:35.606  ]
00:19:35.606  }
00:19:35.606  ]
00:19:35.606  }'
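This closing quote ends the configuration blob that test_save_ublk_config captured from the running target; the test's next step is to kill that target and re-launch a fresh one fed this exact JSON, proving the ublk state is restorable. A minimal sketch of the same round-trip outside the harness (paths assume a standard SPDK checkout; config.json is an illustrative filename):

    # Capture the live configuration, ublk subsystem included
    ./scripts/rpc.py save_config > config.json
    # Re-launch the target and replay the saved configuration at startup
    ./build/bin/spdk_tgt -L ublk -c config.json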
00:19:35.606   14:30:14 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75531
00:19:35.606   14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75531 ']'
00:19:35.606   14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75531
00:19:35.606    14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname
00:19:35.606   14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:35.606    14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75531
00:19:35.606   14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:35.606   14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:35.606  killing process with pid 75531
00:19:35.606   14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75531'
00:19:35.606   14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75531
00:19:35.606   14:30:14 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75531
00:19:37.000  [2024-11-20 14:30:15.739966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:37.000  [2024-11-20 14:30:15.776706] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:37.000  [2024-11-20 14:30:15.776890] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:37.000  [2024-11-20 14:30:15.781616] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:37.000  [2024-11-20 14:30:15.781678] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:19:37.000  [2024-11-20 14:30:15.781698] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:19:37.000  [2024-11-20 14:30:15.781734] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:19:37.000  [2024-11-20 14:30:15.781927] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:19:38.928   14:30:17 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75594
00:19:38.928   14:30:17 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75594
00:19:38.928    14:30:17 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{
00:19:38.928  "subsystems": [
00:19:38.928  {
00:19:38.928  "subsystem": "fsdev",
00:19:38.928  "config": [
00:19:38.928  {
00:19:38.928  "method": "fsdev_set_opts",
00:19:38.928  "params": {
00:19:38.928  "fsdev_io_pool_size": 65535,
00:19:38.928  "fsdev_io_cache_size": 256
00:19:38.928  }
00:19:38.928  }
00:19:38.928  ]
00:19:38.928  },
00:19:38.928  {
00:19:38.928  "subsystem": "keyring",
00:19:38.928  "config": []
00:19:38.928  },
00:19:38.928  {
00:19:38.928  "subsystem": "iobuf",
00:19:38.928  "config": [
00:19:38.928  {
00:19:38.928  "method": "iobuf_set_options",
00:19:38.928  "params": {
00:19:38.928  "small_pool_count": 8192,
00:19:38.928  "large_pool_count": 1024,
00:19:38.928  "small_bufsize": 8192,
00:19:38.928  "large_bufsize": 135168,
00:19:38.928  "enable_numa": false
00:19:38.928  }
00:19:38.928  }
00:19:38.928  ]
00:19:38.928  },
00:19:38.928  {
00:19:38.928  "subsystem": "sock",
00:19:38.928  "config": [
00:19:38.928  {
00:19:38.928  "method": "sock_set_default_impl",
00:19:38.928  "params": {
00:19:38.928  "impl_name": "posix"
00:19:38.928  }
00:19:38.928  },
00:19:38.928  {
00:19:38.928  "method": "sock_impl_set_options",
00:19:38.928  "params": {
00:19:38.928  "impl_name": "ssl",
00:19:38.928  "recv_buf_size": 4096,
00:19:38.928  "send_buf_size": 4096,
00:19:38.928  "enable_recv_pipe": true,
00:19:38.928  "enable_quickack": false,
00:19:38.928  "enable_placement_id": 0,
00:19:38.928  "enable_zerocopy_send_server": true,
00:19:38.928  "enable_zerocopy_send_client": false,
00:19:38.928  "zerocopy_threshold": 0,
00:19:38.928  "tls_version": 0,
00:19:38.928  "enable_ktls": false
00:19:38.928  }
00:19:38.928  },
00:19:38.928  {
00:19:38.928  "method": "sock_impl_set_options",
00:19:38.928  "params": {
00:19:38.928  "impl_name": "posix",
00:19:38.928  "recv_buf_size": 2097152,
00:19:38.928  "send_buf_size": 2097152,
00:19:38.928  "enable_recv_pipe": true,
00:19:38.928  "enable_quickack": false,
00:19:38.928  "enable_placement_id": 0,
00:19:38.928  "enable_zerocopy_send_server": true,
00:19:38.928  "enable_zerocopy_send_client": false,
00:19:38.928  "zerocopy_threshold": 0,
00:19:38.928  "tls_version": 0,
00:19:38.928  "enable_ktls": false
00:19:38.928  }
00:19:38.928  }
00:19:38.928  ]
00:19:38.928  },
00:19:38.928  {
00:19:38.928  "subsystem": "vmd",
00:19:38.928  "config": []
00:19:38.928  },
00:19:38.928  {
00:19:38.928  "subsystem": "accel",
00:19:38.928  "config": [
00:19:38.928  {
00:19:38.928  "method": "accel_set_options",
00:19:38.928  "params": {
00:19:38.928  "small_cache_size": 128,
00:19:38.928  "large_cache_size": 16,
00:19:38.928  "task_count": 2048,
00:19:38.928  "sequence_count": 2048,
00:19:38.928  "buf_count": 2048
00:19:38.928  }
00:19:38.928  }
00:19:38.928  ]
00:19:38.928  },
00:19:38.928  {
00:19:38.928  "subsystem": "bdev",
00:19:38.928  "config": [
00:19:38.928  {
00:19:38.928  "method": "bdev_set_options",
00:19:38.928  "params": {
00:19:38.928  "bdev_io_pool_size": 65535,
00:19:38.928  "bdev_io_cache_size": 256,
00:19:38.929  "bdev_auto_examine": true,
00:19:38.929  "iobuf_small_cache_size": 128,
00:19:38.929  "iobuf_large_cache_size": 16
00:19:38.929  }
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "method": "bdev_raid_set_options",
00:19:38.929  "params": {
00:19:38.929  "process_window_size_kb": 1024,
00:19:38.929  "process_max_bandwidth_mb_sec": 0
00:19:38.929  }
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "method": "bdev_iscsi_set_options",
00:19:38.929  "params": {
00:19:38.929  "timeout_sec": 30
00:19:38.929  }
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "method": "bdev_nvme_set_options",
00:19:38.929  "params": {
00:19:38.929  "action_on_timeout": "none",
00:19:38.929  "timeout_us": 0,
00:19:38.929  "timeout_admin_us": 0,
00:19:38.929  "keep_alive_timeout_ms": 10000,
00:19:38.929  "arbitration_burst": 0,
00:19:38.929  "low_priority_weight": 0,
00:19:38.929  "medium_priority_weight": 0,
00:19:38.929  "high_priority_weight": 0,
00:19:38.929  "nvme_adminq_poll_period_us": 10000,
00:19:38.929  "nvme_ioq_poll_period_us": 0,
00:19:38.929  "io_queue_requests": 0,
00:19:38.929  "delay_cmd_submit": true,
00:19:38.929  "transport_retry_count": 4,
00:19:38.929  "bdev_retry_count": 3,
00:19:38.929  "transport_ack_timeout": 0,
00:19:38.929  "ctrlr_loss_timeout_sec": 0,
00:19:38.929  "reconnect_delay_sec": 0,
00:19:38.929  "fast_io_fail_timeout_sec": 0,
00:19:38.929  "disable_auto_failback": false,
00:19:38.929  "generate_uuids": false,
00:19:38.929  "transport_tos": 0,
00:19:38.929  "nvme_error_stat": false,
00:19:38.929  "rdma_srq_size": 0,
00:19:38.929  "io_path_stat": false,
00:19:38.929  "allow_accel_sequence": false,
00:19:38.929  "rdma_max_cq_size": 0,
00:19:38.929  "rdma_cm_event_timeout_ms": 0,
00:19:38.929  "dhchap_digests": [
00:19:38.929  "sha256",
00:19:38.929  "sha384",
00:19:38.929  "sha512"
00:19:38.929  ],
00:19:38.929  "dhchap_dhgroups": [
00:19:38.929  "null",
00:19:38.929  "ffdhe2048",
00:19:38.929  "ffdhe3072",
00:19:38.929  "ffdhe4096",
00:19:38.929  "ffdhe6144",
00:19:38.929  "ffdhe8192"
00:19:38.929  ]
00:19:38.929  }
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "method": "bdev_nvme_set_hotplug",
00:19:38.929  "params": {
00:19:38.929  "period_us": 100000,
00:19:38.929  "enable": false
00:19:38.929  }
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "method": "bdev_malloc_create",
00:19:38.929  "params": {
00:19:38.929  "name": "malloc0",
00:19:38.929  "num_blocks": 8192,
00:19:38.929  "block_size": 4096,
00:19:38.929  "physical_block_size": 4096,
00:19:38.929  "uuid": "d8ec78aa-7830-455c-b6d9-85cf82073c7e",
00:19:38.929  "optimal_io_boundary": 0,
00:19:38.929  "md_size": 0,
00:19:38.929  "dif_type": 0,
00:19:38.929  "dif_is_head_of_md": false,
00:19:38.929  "dif_pi_format": 0
00:19:38.929  }
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "method": "bdev_wait_for_examine"
00:19:38.929  }
00:19:38.929  ]
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "subsystem": "scsi",
00:19:38.929  "config": null
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "subsystem": "scheduler",
00:19:38.929  "config": [
00:19:38.929  {
00:19:38.929  "method": "framework_set_scheduler",
00:19:38.929  "params": {
00:19:38.929  "name": "static"
00:19:38.929  }
00:19:38.929  }
00:19:38.929  ]
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "subsystem": "vhost_scsi",
00:19:38.929  "config": []
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "subsystem": "vhost_blk",
00:19:38.929  "config": []
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "subsystem": "ublk",
00:19:38.929  "config": [
00:19:38.929  {
00:19:38.929  "method": "ublk_create_target",
00:19:38.929  "params": {
00:19:38.929  "cpumask": "1"
00:19:38.929  }
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "method": "ublk_start_disk",
00:19:38.929  "params": {
00:19:38.929  "bdev_name": "malloc0",
00:19:38.929  "ublk_id": 0,
00:19:38.929  "num_queues": 1,
00:19:38.929  "queue_depth": 128
00:19:38.929  }
00:19:38.929  }
00:19:38.929  ]
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "subsystem": "nbd",
00:19:38.929  "config": []
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "subsystem": "nvmf",
00:19:38.929  "config": [
00:19:38.929  {
00:19:38.929  "method": "nvmf_set_config",
00:19:38.929  "params": {
00:19:38.929  "discovery_filter": "match_any",
00:19:38.929  "admin_cmd_passthru": {
00:19:38.929  "identify_ctrlr": false
00:19:38.929  },
00:19:38.929  "dhchap_digests": [
00:19:38.929  "sha256",
00:19:38.929  "sha384",
00:19:38.929  "sha512"
00:19:38.929  ],
00:19:38.929  "dhchap_dhgroups": [
00:19:38.929  "null",
00:19:38.929  "ffdhe2048",
00:19:38.929  "ffdhe3072",
00:19:38.929  "ffdhe4096",
00:19:38.929  "ffdhe6144",
00:19:38.929  "ffdhe81 14:30:17 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63
00:19:38.929  92"
00:19:38.929  ]
00:19:38.929  }
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "method": "nvmf_set_max_subsystems",
00:19:38.929  "params": {
00:19:38.929  "max_subsystems": 1024
00:19:38.929  }
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "method": "nvmf_set_crdt",
00:19:38.929  "params": {
00:19:38.929  "crdt1": 0,
00:19:38.929  "crdt2": 0,
00:19:38.929  "crdt3": 0
00:19:38.929  }
00:19:38.929  }
00:19:38.929  ]
00:19:38.929  },
00:19:38.929  {
00:19:38.929  "subsystem": "iscsi",
00:19:38.929  "config": [
00:19:38.929  {
00:19:38.929  "method": "iscsi_set_options",
00:19:38.929  "params": {
00:19:38.929  "node_base": "iqn.2016-06.io.spdk",
00:19:38.929  "max_sessions": 128,
00:19:38.929  "max_connections_per_session": 2,
00:19:38.929  "max_queue_depth": 64,
00:19:38.929  "default_time2wait": 2,
00:19:38.929  "default_time2retain": 20,
00:19:38.929  "first_burst_length": 8192,
00:19:38.929  "immediate_data": true,
00:19:38.929  "allow_duplicated_isid": false,
00:19:38.929  "error_recovery_level": 0,
00:19:38.929  "nop_timeout": 60,
00:19:38.929  "nop_in_interval": 30,
00:19:38.929  "disable_chap": false,
00:19:38.929  "require_chap": false,
00:19:38.929  "mutual_chap": false,
00:19:38.929  "chap_group": 0,
00:19:38.929  "max_large_datain_per_connection": 64,
00:19:38.929  "max_r2t_per_connection": 4,
00:19:38.929  "pdu_pool_size": 36864,
00:19:38.929  "immediate_data_pool_size": 16384,
00:19:38.929  "data_out_pool_size": 2048
00:19:38.929  }
00:19:38.929  }
00:19:38.929  ]
00:19:38.929  }
00:19:38.929  ]
00:19:38.929  }'
00:19:38.929   14:30:17 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75594 ']'
00:19:38.929   14:30:17 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:38.929   14:30:17 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:38.929   14:30:17 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:38.929  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:38.929   14:30:17 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:38.929   14:30:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:38.929  [2024-11-20 14:30:17.630423] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:19:38.929  [2024-11-20 14:30:17.630648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75594 ]
00:19:38.929  [2024-11-20 14:30:17.812394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:39.188  [2024-11-20 14:30:17.917245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:40.122  [2024-11-20 14:30:18.879600] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:40.122  [2024-11-20 14:30:18.880798] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:40.122  [2024-11-20 14:30:18.887753] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:19:40.122  [2024-11-20 14:30:18.887855] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:19:40.122  [2024-11-20 14:30:18.887874] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:40.122  [2024-11-20 14:30:18.887893] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:40.122  [2024-11-20 14:30:18.896690] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:40.122  [2024-11-20 14:30:18.896720] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:40.122  [2024-11-20 14:30:18.903628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:40.122  [2024-11-20 14:30:18.903771] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:40.122  [2024-11-20 14:30:18.920598] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:40.122   14:30:18 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:40.122   14:30:18 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:19:40.122    14:30:18 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks
00:19:40.122    14:30:18 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device'
00:19:40.122    14:30:18 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:40.122    14:30:18 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:40.122    14:30:18 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]]
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]]
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75594
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75594 ']'
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75594
00:19:40.122    14:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:40.122    14:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75594
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:40.122  killing process with pid 75594
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75594'
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75594
00:19:40.122   14:30:19 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75594
00:19:42.036  [2024-11-20 14:30:20.770367] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:42.036  [2024-11-20 14:30:20.806683] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:42.036  [2024-11-20 14:30:20.806851] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:42.036  [2024-11-20 14:30:20.816608] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:42.036  [2024-11-20 14:30:20.816671] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:19:42.036  [2024-11-20 14:30:20.816685] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:19:42.036  [2024-11-20 14:30:20.816720] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:19:42.036  [2024-11-20 14:30:20.816908] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
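The DEBUG lines above trace the ublk control-command lifecycle end to end: bring-up runs UBLK_CMD_ADD_DEV, UBLK_CMD_SET_PARAMS, UBLK_CMD_START_DEV, and teardown runs UBLK_CMD_STOP_DEV, UBLK_CMD_DEL_DEV before _ublk_fini completes. A minimal sketch of the RPC calls that drive that sequence (assuming the default /var/tmp/spdk.sock socket and the malloc0 bdev from the config above):

    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # ADD_DEV -> SET_PARAMS -> START_DEV
    ./scripts/rpc.py ublk_stop_disk 0                        # STOP_DEV -> DEL_DEV
    ./scripts/rpc.py ublk_destroy_target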
00:19:43.950   14:30:22 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT
00:19:43.950  
00:19:43.950  real	0m9.768s
00:19:43.950  user	0m7.660s
00:19:43.950  sys	0m3.279s
00:19:43.950   14:30:22 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:43.950   14:30:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:19:43.950  ************************************
00:19:43.950  END TEST test_save_ublk_config
00:19:43.950  ************************************
00:19:43.950   14:30:22 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75680
00:19:43.950   14:30:22 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:19:43.950   14:30:22 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:43.950   14:30:22 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75680
00:19:43.950   14:30:22 ublk -- common/autotest_common.sh@835 -- # '[' -z 75680 ']'
00:19:43.950   14:30:22 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:43.950   14:30:22 ublk -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:43.950   14:30:22 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:43.950  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:43.950   14:30:22 ublk -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:43.950   14:30:22 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:43.950  [2024-11-20 14:30:22.735538] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:19:43.950  [2024-11-20 14:30:22.735727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75680 ]
00:19:43.950  [2024-11-20 14:30:22.922058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:44.209  [2024-11-20 14:30:23.025895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:44.209  [2024-11-20 14:30:23.025907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:45.143   14:30:23 ublk -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:45.143   14:30:23 ublk -- common/autotest_common.sh@868 -- # return 0
00:19:45.143   14:30:23 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk
00:19:45.143   14:30:23 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:45.143   14:30:23 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:45.143   14:30:23 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.143  ************************************
00:19:45.143  START TEST test_create_ublk
00:19:45.143  ************************************
00:19:45.143   14:30:23 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk
00:19:45.143    14:30:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target
00:19:45.143    14:30:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.143    14:30:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.143  [2024-11-20 14:30:23.814598] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:45.143  [2024-11-20 14:30:23.817028] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:45.143    14:30:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.143   14:30:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target=
00:19:45.143    14:30:23 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096
00:19:45.143    14:30:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.143    14:30:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.143    14:30:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.143   14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0
00:19:45.143    14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:19:45.143    14:30:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.143    14:30:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.143  [2024-11-20 14:30:24.070804] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:19:45.143  [2024-11-20 14:30:24.071400] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:19:45.143  [2024-11-20 14:30:24.071431] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:45.143  [2024-11-20 14:30:24.071443] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:45.143  [2024-11-20 14:30:24.078949] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:45.143  [2024-11-20 14:30:24.078983] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:45.143  [2024-11-20 14:30:24.086658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:45.143  [2024-11-20 14:30:24.087483] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:45.143  [2024-11-20 14:30:24.103639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:45.143    14:30:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.143   14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0
00:19:45.143   14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0
00:19:45.143    14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0
00:19:45.143    14:30:24 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.143    14:30:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.401    14:30:24 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.401   14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[
00:19:45.401  {
00:19:45.401  "ublk_device": "/dev/ublkb0",
00:19:45.401  "id": 0,
00:19:45.401  "queue_depth": 512,
00:19:45.401  "num_queues": 4,
00:19:45.401  "bdev_name": "Malloc0"
00:19:45.401  }
00:19:45.401  ]'
00:19:45.401    14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device'
00:19:45.401   14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:19:45.401    14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id'
00:19:45.401   14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]]
00:19:45.401    14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth'
00:19:45.401   14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]]
00:19:45.401    14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues'
00:19:45.401   14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]]
00:19:45.401    14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name'
00:19:45.401   14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:19:45.401   14:30:24 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'
00:19:45.401   14:30:24 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0
00:19:45.401   14:30:24 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0
00:19:45.401   14:30:24 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728
00:19:45.401   14:30:24 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write
00:19:45.401   14:30:24 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc
00:19:45.401   14:30:24 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10'
00:19:45.402   14:30:24 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template=
00:19:45.402   14:30:24 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]]
00:19:45.402   14:30:24 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:19:45.402   14:30:24 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0'
00:19:45.402   14:30:24 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:19:45.660  fio: verification read phase will never start because write phase uses all of runtime
00:19:45.660  fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:19:45.660  fio-3.35
00:19:45.660  Starting 1 process
00:19:55.623  
00:19:55.623  fio_test: (groupid=0, jobs=1): err= 0: pid=75728: Wed Nov 20 14:30:34 2024
00:19:55.623    write: IOPS=11.3k, BW=44.2MiB/s (46.4MB/s)(442MiB/10001msec); 0 zone resets
00:19:55.623      clat (usec): min=55, max=4019, avg=86.66, stdev=132.63
00:19:55.623       lat (usec): min=55, max=4020, avg=87.55, stdev=132.66
00:19:55.623      clat percentiles (usec):
00:19:55.623       |  1.00th=[   62],  5.00th=[   71], 10.00th=[   72], 20.00th=[   74],
00:19:55.623       | 30.00th=[   75], 40.00th=[   75], 50.00th=[   76], 60.00th=[   78],
00:19:55.623       | 70.00th=[   80], 80.00th=[   84], 90.00th=[   92], 95.00th=[  108],
00:19:55.623       | 99.00th=[  135], 99.50th=[  159], 99.90th=[ 2737], 99.95th=[ 3228],
00:19:55.623       | 99.99th=[ 3720]
00:19:55.623     bw (  KiB/s): min=31808, max=48984, per=100.00%, avg=45326.74, stdev=3606.87, samples=19
00:19:55.623     iops        : min= 7952, max=12246, avg=11331.68, stdev=901.72, samples=19
00:19:55.623    lat (usec)   : 100=93.06%, 250=6.55%, 500=0.04%, 750=0.02%, 1000=0.02%
00:19:55.623    lat (msec)   : 2=0.11%, 4=0.19%, 10=0.01%
00:19:55.623    cpu          : usr=2.76%, sys=8.07%, ctx=113279, majf=0, minf=796
00:19:55.623    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:55.623       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:55.623       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:55.623       issued rwts: total=0,113249,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:55.623       latency   : target=0, window=0, percentile=100.00%, depth=1
00:19:55.623  
00:19:55.623  Run status group 0 (all jobs):
00:19:55.623    WRITE: bw=44.2MiB/s (46.4MB/s), 44.2MiB/s-44.2MiB/s (46.4MB/s-46.4MB/s), io=442MiB (464MB), run=10001-10001msec
00:19:55.623  
00:19:55.623  Disk stats (read/write):
00:19:55.623    ublkb0: ios=0/112071, merge=0/0, ticks=0/8863, in_queue=8863, util=99.03%
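The job wrote the 0xcc pattern to /dev/ublkb0 for the full 10-second runtime; as fio warned up front, the inline verification read phase never ran because the time_based write phase consumed the whole budget, which is why the disk stats show zero reads. A quick manual spot-check of the pattern while the disk is still live (a sketch; reads one 4 KiB block and expects only cc bytes):

    # Dump the first block and collapse identical hex lines; expect a single all-cc line
    dd if=/dev/ublkb0 bs=4096 count=1 iflag=direct status=none | od -An -tx1 | sort -u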
00:19:55.623   14:30:34 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:19:55.623   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.623   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:55.623  [2024-11-20 14:30:34.599050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:55.881  [2024-11-20 14:30:34.635092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:55.881  [2024-11-20 14:30:34.636134] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:55.881  [2024-11-20 14:30:34.642633] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:55.881  [2024-11-20 14:30:34.642956] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:19:55.881  [2024-11-20 14:30:34.642983] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.881   14:30:34 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:55.881    14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:55.881  [2024-11-20 14:30:34.658697] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0
00:19:55.881  request:
00:19:55.881  {
00:19:55.881  "ublk_id": 0,
00:19:55.881  "method": "ublk_stop_disk",
00:19:55.881  "req_id": 1
00:19:55.881  }
00:19:55.881  Got JSON-RPC error response
00:19:55.881  response:
00:19:55.881  {
00:19:55.881  "code": -19,
00:19:55.881  "message": "No such device"
00:19:55.881  }
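Stopping the already-deleted disk fails as expected: the NOT wrapper asserts a non-zero exit, and error code -19 is -ENODEV ("No such device"), since UBLK_CMD_DEL_DEV already removed dev 0. A minimal sketch of reproducing the same error by hand (assumes the default /var/tmp/spdk.sock listener):

    # A second stop of the same ublk_id fails once the device is deleted
    ./scripts/rpc.py ublk_stop_disk 0
    # => JSON-RPC error response: {"code": -19, "message": "No such device"}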
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:55.881   14:30:34 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:55.881  [2024-11-20 14:30:34.674697] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:19:55.881  [2024-11-20 14:30:34.682614] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:19:55.881  [2024-11-20 14:30:34.682711] ublk_rpc.c:  63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.881   14:30:34 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.881   14:30:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:56.446   14:30:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.446   14:30:35 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices
00:19:56.446    14:30:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:19:56.446    14:30:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.446    14:30:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:56.446    14:30:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.446   14:30:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:19:56.446    14:30:35 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length
00:19:56.446   14:30:35 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:19:56.446    14:30:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:19:56.446    14:30:35 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.446    14:30:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:56.446    14:30:35 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.446   14:30:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:19:56.446    14:30:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length
00:19:56.734   14:30:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:19:56.734  
00:19:56.734  real	0m11.649s
00:19:56.734  user	0m0.694s
00:19:56.734  sys	0m0.900s
00:19:56.734   14:30:35 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:56.734  ************************************
00:19:56.734  END TEST test_create_ublk
00:19:56.734  ************************************
00:19:56.734   14:30:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:56.734   14:30:35 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:19:56.734   14:30:35 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:56.734   14:30:35 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:56.734   14:30:35 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:56.734  ************************************
00:19:56.734  START TEST test_create_multi_ublk
00:19:56.734  ************************************
00:19:56.734   14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk
00:19:56.734    14:30:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:19:56.734    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.734    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:56.734  [2024-11-20 14:30:35.509597] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:56.734  [2024-11-20 14:30:35.511990] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:56.734    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.734   14:30:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target=
00:19:56.734    14:30:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3
00:19:56.734   14:30:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:56.734    14:30:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096
00:19:56.734    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.734    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:56.992    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.992   14:30:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0
00:19:56.992    14:30:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:19:56.992    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.992    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:56.992  [2024-11-20 14:30:35.797834] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:19:56.992  [2024-11-20 14:30:35.798332] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:19:56.992  [2024-11-20 14:30:35.798354] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:56.992  [2024-11-20 14:30:35.798370] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:56.992  [2024-11-20 14:30:35.806819] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:56.992  [2024-11-20 14:30:35.806884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:56.992  [2024-11-20 14:30:35.813678] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:56.992  [2024-11-20 14:30:35.814612] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:56.992  [2024-11-20 14:30:35.824316] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:56.992    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:56.992   14:30:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0
00:19:56.992   14:30:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:56.992    14:30:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096
00:19:56.992    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:56.992    14:30:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:57.250    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.250   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1
00:19:57.250    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512
00:19:57.250    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:57.250    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:57.250  [2024-11-20 14:30:36.109779] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512
00:19:57.250  [2024-11-20 14:30:36.110276] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1
00:19:57.250  [2024-11-20 14:30:36.110304] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:19:57.250  [2024-11-20 14:30:36.110315] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:19:57.250  [2024-11-20 14:30:36.117627] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:57.250  [2024-11-20 14:30:36.117656] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:57.250  [2024-11-20 14:30:36.125622] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:57.250  [2024-11-20 14:30:36.126372] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:19:57.250  [2024-11-20 14:30:36.134663] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:19:57.250    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.250   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1
00:19:57.250   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:57.250    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096
00:19:57.250    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:57.250    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:57.508    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.508   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2
00:19:57.509    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512
00:19:57.509    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:57.509    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:57.509  [2024-11-20 14:30:36.389739] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512
00:19:57.509  [2024-11-20 14:30:36.390227] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2
00:19:57.509  [2024-11-20 14:30:36.390250] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq
00:19:57.509  [2024-11-20 14:30:36.390263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV
00:19:57.509  [2024-11-20 14:30:36.397628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:57.509  [2024-11-20 14:30:36.397663] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:57.509  [2024-11-20 14:30:36.405601] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:57.509  [2024-11-20 14:30:36.406327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV
00:19:57.509  [2024-11-20 14:30:36.414672] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed
00:19:57.509    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.509   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2
00:19:57.509   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:57.509    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096
00:19:57.509    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:57.509    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:57.766    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.767   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3
00:19:57.767    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512
00:19:57.767    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:57.767    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:57.767  [2024-11-20 14:30:36.670775] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512
00:19:57.767  [2024-11-20 14:30:36.671289] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3
00:19:57.767  [2024-11-20 14:30:36.671309] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq
00:19:57.767  [2024-11-20 14:30:36.671319] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV
00:19:57.767  [2024-11-20 14:30:36.677629] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:57.767  [2024-11-20 14:30:36.677661] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:57.767  [2024-11-20 14:30:36.685625] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:57.767  [2024-11-20 14:30:36.686330] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:19:57.767  [2024-11-20 14:30:36.692705] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:19:57.767    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.767   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:19:57.767    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:19:57.767    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:57.767    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:57.767    14:30:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:57.767   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:19:57.767  {
00:19:57.767  "ublk_device": "/dev/ublkb0",
00:19:57.767  "id": 0,
00:19:57.767  "queue_depth": 512,
00:19:57.767  "num_queues": 4,
00:19:57.767  "bdev_name": "Malloc0"
00:19:57.767  },
00:19:57.767  {
00:19:57.767  "ublk_device": "/dev/ublkb1",
00:19:57.767  "id": 1,
00:19:57.767  "queue_depth": 512,
00:19:57.767  "num_queues": 4,
00:19:57.767  "bdev_name": "Malloc1"
00:19:57.767  },
00:19:57.767  {
00:19:57.767  "ublk_device": "/dev/ublkb2",
00:19:57.767  "id": 2,
00:19:57.767  "queue_depth": 512,
00:19:57.767  "num_queues": 4,
00:19:57.767  "bdev_name": "Malloc2"
00:19:57.767  },
00:19:57.767  {
00:19:57.767  "ublk_device": "/dev/ublkb3",
00:19:57.767  "id": 3,
00:19:57.767  "queue_depth": 512,
00:19:57.767  "num_queues": 4,
00:19:57.767  "bdev_name": "Malloc3"
00:19:57.767  }
00:19:57.767  ]'
00:19:57.767    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:19:57.767   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:57.767    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:19:58.025   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:19:58.025    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:19:58.025   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:19:58.025    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:19:58.025   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:19:58.025    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:19:58.025   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:19:58.025    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:19:58.025   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:19:58.025   14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:58.025    14:30:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:19:58.284   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]]
00:19:58.284    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id'
00:19:58.284   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]]
00:19:58.284    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth'
00:19:58.284   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:19:58.284    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues'
00:19:58.284   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:19:58.284    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name'
00:19:58.284   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]]
00:19:58.284   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:58.284    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device'
00:19:58.542   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]]
00:19:58.542    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id'
00:19:58.542   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]]
00:19:58.542    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth'
00:19:58.542   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:19:58.542    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues'
00:19:58.542   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:19:58.542    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name'
00:19:58.542   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]]
00:19:58.542   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:58.542    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device'
00:19:58.799   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]]
00:19:58.799    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id'
00:19:58.799   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]]
00:19:58.799    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth'
00:19:58.799   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:19:58.799    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues'
00:19:58.799   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:19:58.799    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name'
00:19:58.799   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]]
00:19:58.799   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]]
00:19:58.799    14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3
00:19:58.800   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:58.800   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0
00:19:58.800   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:58.800   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:58.800  [2024-11-20 14:30:37.755424] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:59.057  [2024-11-20 14:30:37.797682] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:59.057  [2024-11-20 14:30:37.798740] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:59.057  [2024-11-20 14:30:37.806662] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:59.057  [2024-11-20 14:30:37.806986] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:19:59.057  [2024-11-20 14:30:37.807013] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:19:59.057   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:59.057   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:59.057   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1
00:19:59.057   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:59.057   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:59.057  [2024-11-20 14:30:37.821719] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:19:59.057  [2024-11-20 14:30:37.863126] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:59.057  [2024-11-20 14:30:37.864472] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:19:59.057  [2024-11-20 14:30:37.869634] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:59.057  [2024-11-20 14:30:37.869989] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:19:59.057  [2024-11-20 14:30:37.870025] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:19:59.057   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:59.057   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:59.057   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2
00:19:59.057   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:59.057   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:59.057  [2024-11-20 14:30:37.884807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV
00:19:59.057  [2024-11-20 14:30:37.923065] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:59.057  [2024-11-20 14:30:37.924219] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV
00:19:59.057  [2024-11-20 14:30:37.933615] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:59.057  [2024-11-20 14:30:37.933931] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq
00:19:59.058  [2024-11-20 14:30:37.933951] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped
00:19:59.058   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:59.058   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:59.058   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3
00:19:59.058   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:59.058   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:59.058  [2024-11-20 14:30:37.949734] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV
00:19:59.058  [2024-11-20 14:30:37.981666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:59.058  [2024-11-20 14:30:37.982547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV
00:19:59.058  [2024-11-20 14:30:37.989615] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:59.058  [2024-11-20 14:30:37.989971] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq
00:19:59.058  [2024-11-20 14:30:37.989997] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped
00:19:59.058   14:30:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:59.058   14:30:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
00:19:59.622  [2024-11-20 14:30:38.301709] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:19:59.622  [2024-11-20 14:30:38.309592] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:19:59.622  [2024-11-20 14:30:38.309650] ublk_rpc.c:  63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:19:59.622    14:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3
00:19:59.622   14:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:19:59.622   14:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0
00:19:59.622   14:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:59.622   14:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:00.219   14:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:00.219   14:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:00.219   14:30:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1
00:20:00.219   14:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.219   14:30:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:00.784   14:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:00.785   14:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:00.785   14:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2
00:20:00.785   14:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:00.785   14:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:01.043   14:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:01.043   14:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:20:01.043   14:30:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3
00:20:01.043   14:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.043   14:30:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:01.301   14:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:01.301   14:30:40 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices
00:20:01.301    14:30:40 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:20:01.301    14:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.301    14:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:01.301    14:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:01.301   14:30:40 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:20:01.301    14:30:40 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length
00:20:01.301   14:30:40 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:20:01.301    14:30:40 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:20:01.301    14:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:01.301    14:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:01.301    14:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:01.301   14:30:40 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:20:01.301    14:30:40 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length
00:20:01.301  ************************************
00:20:01.301  END TEST test_create_multi_ublk
00:20:01.301  ************************************
00:20:01.301   14:30:40 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:20:01.301  
00:20:01.301  real	0m4.743s
00:20:01.301  user	0m1.350s
00:20:01.301  sys	0m0.165s
00:20:01.301   14:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:01.301   14:30:40 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:20:01.301   14:30:40 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:20:01.301   14:30:40 ublk -- ublk/ublk.sh@147 -- # cleanup
00:20:01.301   14:30:40 ublk -- ublk/ublk.sh@130 -- # killprocess 75680
00:20:01.301   14:30:40 ublk -- common/autotest_common.sh@954 -- # '[' -z 75680 ']'
00:20:01.301   14:30:40 ublk -- common/autotest_common.sh@958 -- # kill -0 75680
00:20:01.558    14:30:40 ublk -- common/autotest_common.sh@959 -- # uname
00:20:01.558   14:30:40 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:01.558    14:30:40 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75680
00:20:01.558   14:30:40 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:01.558   14:30:40 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:01.558   14:30:40 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75680'
00:20:01.558  killing process with pid 75680
00:20:01.558   14:30:40 ublk -- common/autotest_common.sh@973 -- # kill 75680
00:20:01.558   14:30:40 ublk -- common/autotest_common.sh@978 -- # wait 75680
00:20:02.526  [2024-11-20 14:30:41.320896] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:20:02.526  [2024-11-20 14:30:41.320970] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:20:03.899  
00:20:03.899  real	0m29.903s
00:20:03.899  user	0m43.483s
00:20:03.899  sys	0m10.295s
00:20:03.899   14:30:42 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:03.899  ************************************
00:20:03.899  END TEST ublk
00:20:03.899  ************************************
00:20:03.899   14:30:42 ublk -- common/autotest_common.sh@10 -- # set +x
00:20:03.899   14:30:42  -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:20:03.899   14:30:42  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:03.899   14:30:42  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:03.899   14:30:42  -- common/autotest_common.sh@10 -- # set +x
00:20:03.899  ************************************
00:20:03.899  START TEST ublk_recovery
00:20:03.899  ************************************
00:20:03.899   14:30:42 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:20:03.899  * Looking for test storage...
00:20:03.899  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:20:03.899    14:30:42 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:20:03.899     14:30:42 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version
00:20:03.899     14:30:42 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:20:03.899    14:30:42 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-:
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-:
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<'
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@345 -- # : 1
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:03.899     14:30:42 ublk_recovery -- scripts/common.sh@365 -- # decimal 1
00:20:03.899     14:30:42 ublk_recovery -- scripts/common.sh@353 -- # local d=1
00:20:03.899     14:30:42 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:03.899     14:30:42 ublk_recovery -- scripts/common.sh@355 -- # echo 1
00:20:03.899    14:30:42 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1
00:20:03.899     14:30:42 ublk_recovery -- scripts/common.sh@366 -- # decimal 2
00:20:03.899     14:30:42 ublk_recovery -- scripts/common.sh@353 -- # local d=2
00:20:03.899     14:30:42 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:03.899     14:30:42 ublk_recovery -- scripts/common.sh@355 -- # echo 2
00:20:03.900    14:30:42 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2
00:20:03.900    14:30:42 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:03.900    14:30:42 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:03.900    14:30:42 ublk_recovery -- scripts/common.sh@368 -- # return 0
00:20:03.900    14:30:42 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:03.900    14:30:42 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:20:03.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.900  		--rc genhtml_branch_coverage=1
00:20:03.900  		--rc genhtml_function_coverage=1
00:20:03.900  		--rc genhtml_legend=1
00:20:03.900  		--rc geninfo_all_blocks=1
00:20:03.900  		--rc geninfo_unexecuted_blocks=1
00:20:03.900  		
00:20:03.900  		'
00:20:03.900    14:30:42 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:20:03.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.900  		--rc genhtml_branch_coverage=1
00:20:03.900  		--rc genhtml_function_coverage=1
00:20:03.900  		--rc genhtml_legend=1
00:20:03.900  		--rc geninfo_all_blocks=1
00:20:03.900  		--rc geninfo_unexecuted_blocks=1
00:20:03.900  		
00:20:03.900  		'
00:20:03.900    14:30:42 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:20:03.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.900  		--rc genhtml_branch_coverage=1
00:20:03.900  		--rc genhtml_function_coverage=1
00:20:03.900  		--rc genhtml_legend=1
00:20:03.900  		--rc geninfo_all_blocks=1
00:20:03.900  		--rc geninfo_unexecuted_blocks=1
00:20:03.900  		
00:20:03.900  		'
00:20:03.900    14:30:42 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:20:03.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.900  		--rc genhtml_branch_coverage=1
00:20:03.900  		--rc genhtml_function_coverage=1
00:20:03.900  		--rc genhtml_legend=1
00:20:03.900  		--rc geninfo_all_blocks=1
00:20:03.900  		--rc geninfo_unexecuted_blocks=1
00:20:03.900  		
00:20:03.900  		'
00:20:03.900   14:30:42 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:20:03.900    14:30:42 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:20:03.900    14:30:42 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
00:20:03.900    14:30:42 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:20:03.900    14:30:42 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
00:20:03.900    14:30:42 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:20:03.900    14:30:42 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:20:03.900    14:30:42 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:20:03.900    14:30:42 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:20:03.900   14:30:42 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv
00:20:03.900  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:03.900   14:30:42 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76101
00:20:03.900   14:30:42 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:03.900   14:30:42 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76101
00:20:03.900   14:30:42 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:20:03.900   14:30:42 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76101 ']'
00:20:03.900   14:30:42 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:03.900   14:30:42 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:03.900   14:30:42 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:03.900   14:30:42 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:03.900   14:30:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:03.900  [2024-11-20 14:30:42.872757] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:20:03.900  [2024-11-20 14:30:42.872927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76101 ]
00:20:04.158  [2024-11-20 14:30:43.055284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:04.416  [2024-11-20 14:30:43.190775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:04.416  [2024-11-20 14:30:43.190786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:05.349   14:30:43 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:05.349   14:30:43 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:20:05.349   14:30:43 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target
00:20:05.349   14:30:43 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.349   14:30:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:05.350  [2024-11-20 14:30:44.000610] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:20:05.350  [2024-11-20 14:30:44.003084] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:20:05.350   14:30:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.350   14:30:44 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:20:05.350   14:30:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.350   14:30:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:05.350  malloc0
00:20:05.350   14:30:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.350   14:30:44 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
00:20:05.350   14:30:44 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:05.350   14:30:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:05.350  [2024-11-20 14:30:44.143829] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128
00:20:05.350  [2024-11-20 14:30:44.143990] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1
00:20:05.350  [2024-11-20 14:30:44.144012] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:20:05.350  [2024-11-20 14:30:44.144025] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:20:05.350  [2024-11-20 14:30:44.152697] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:20:05.350  [2024-11-20 14:30:44.152722] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:20:05.350  [2024-11-20 14:30:44.159619] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:20:05.350  [2024-11-20 14:30:44.159805] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:20:05.350  [2024-11-20 14:30:44.174608] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:20:05.350  1
00:20:05.350   14:30:44 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.350   14:30:44 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1
00:20:06.283   14:30:45 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76142
00:20:06.283   14:30:45 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60
00:20:06.283   14:30:45 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5
00:20:06.541  fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:20:06.541  fio-3.35
00:20:06.541  Starting 1 process
00:20:11.828   14:30:50 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76101
00:20:11.828   14:30:50 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5
00:20:17.084  /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76101 Killed                  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk
00:20:17.084   14:30:55 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76242
00:20:17.084   14:30:55 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:20:17.084   14:30:55 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:17.084   14:30:55 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76242
00:20:17.084  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:17.084   14:30:55 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76242 ']'
00:20:17.084   14:30:55 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:17.084   14:30:55 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:17.084   14:30:55 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:17.084   14:30:55 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:17.084   14:30:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:17.084  [2024-11-20 14:30:55.303307] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:20:17.084  [2024-11-20 14:30:55.303727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76242 ]
00:20:17.084  [2024-11-20 14:30:55.485385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:17.084  [2024-11-20 14:30:55.617099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:17.084  [2024-11-20 14:30:55.617111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:20:17.650   14:30:56 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:17.650  [2024-11-20 14:30:56.438597] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:20:17.650  [2024-11-20 14:30:56.441039] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.650   14:30:56 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:17.650  malloc0
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.650   14:30:56 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:17.650  [2024-11-20 14:30:56.566847] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0
00:20:17.650  [2024-11-20 14:30:56.566904] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:20:17.650  [2024-11-20 14:30:56.566922] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:20:17.650  [2024-11-20 14:30:56.574635] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:20:17.650  [2024-11-20 14:30:56.574671] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2
00:20:17.650  [2024-11-20 14:30:56.574686] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:20:17.650  [2024-11-20 14:30:56.574788] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:20:17.650  1
00:20:17.650   14:30:56 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.650   14:30:56 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76142
00:20:17.650  [2024-11-20 14:30:56.582602] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:20:17.650  [2024-11-20 14:30:56.589322] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:20:17.650  [2024-11-20 14:30:56.596844] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:20:17.650  [2024-11-20 14:30:56.596879] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:21:13.933  
00:21:13.933  fio_test: (groupid=0, jobs=1): err= 0: pid=76145: Wed Nov 20 14:31:45 2024
00:21:13.933    read: IOPS=17.8k, BW=69.3MiB/s (72.7MB/s)(4161MiB/60002msec)
00:21:13.933      slat (nsec): min=1896, max=307939, avg=6495.58, stdev=2871.06
00:21:13.933      clat (usec): min=1060, max=6418.0k, avg=3548.01, stdev=49712.78
00:21:13.933       lat (usec): min=1089, max=6418.0k, avg=3554.50, stdev=49712.77
00:21:13.933      clat percentiles (usec):
00:21:13.933       |  1.00th=[ 2540],  5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2900],
00:21:13.933       | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032],
00:21:13.933       | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3687], 95.00th=[ 4228],
00:21:13.933       | 99.00th=[ 5800], 99.50th=[ 6521], 99.90th=[ 7767], 99.95th=[ 8586],
00:21:13.933       | 99.99th=[13304]
00:21:13.933     bw (  KiB/s): min=24432, max=84048, per=100.00%, avg=78975.40, stdev=8251.78, samples=107
00:21:13.933     iops        : min= 6108, max=21012, avg=19743.85, stdev=2062.95, samples=107
00:21:13.933    write: IOPS=17.7k, BW=69.3MiB/s (72.6MB/s)(4157MiB/60002msec); 0 zone resets
00:21:13.933      slat (nsec): min=1968, max=373982, avg=6752.37, stdev=2946.82
00:21:13.933      clat (usec): min=970, max=6418.3k, avg=3650.50, stdev=49735.23
00:21:13.933       lat (usec): min=978, max=6418.3k, avg=3657.26, stdev=49735.22
00:21:13.933      clat percentiles (usec):
00:21:13.933       |  1.00th=[ 2573],  5.00th=[ 2900], 10.00th=[ 2966], 20.00th=[ 3032],
00:21:13.933       | 30.00th=[ 3064], 40.00th=[ 3097], 50.00th=[ 3130], 60.00th=[ 3163],
00:21:13.933       | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3720], 95.00th=[ 4293],
00:21:13.933       | 99.00th=[ 5800], 99.50th=[ 6587], 99.90th=[ 7898], 99.95th=[ 8455],
00:21:13.933       | 99.99th=[13435]
00:21:13.933     bw (  KiB/s): min=24864, max=84056, per=100.00%, avg=78898.02, stdev=8141.66, samples=107
00:21:13.934     iops        : min= 6216, max=21014, avg=19724.50, stdev=2035.41, samples=107
00:21:13.934    lat (usec)   : 1000=0.01%
00:21:13.934    lat (msec)   : 2=0.05%, 4=92.38%, 10=7.53%, 20=0.03%, >=2000=0.01%
00:21:13.934    cpu          : usr=10.37%, sys=22.42%, ctx=73324, majf=0, minf=13
00:21:13.934    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:21:13.934       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:13.934       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:13.934       issued rwts: total=1065131,1064213,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:13.934       latency   : target=0, window=0, percentile=100.00%, depth=128
00:21:13.934  
00:21:13.934  Run status group 0 (all jobs):
00:21:13.934     READ: bw=69.3MiB/s (72.7MB/s), 69.3MiB/s-69.3MiB/s (72.7MB/s-72.7MB/s), io=4161MiB (4363MB), run=60002-60002msec
00:21:13.934    WRITE: bw=69.3MiB/s (72.6MB/s), 69.3MiB/s-69.3MiB/s (72.6MB/s-72.6MB/s), io=4157MiB (4359MB), run=60002-60002msec
00:21:13.934  
00:21:13.934  Disk stats (read/write):
00:21:13.934    ublkb1: ios=1062968/1062038, merge=0/0, ticks=3673639/3658912, in_queue=7332551, util=99.92%
00:21:13.934   14:31:45 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:21:13.934  [2024-11-20 14:31:45.441394] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:21:13.934  [2024-11-20 14:31:45.474742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:21:13.934  [2024-11-20 14:31:45.475112] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:21:13.934  [2024-11-20 14:31:45.482626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:21:13.934  [2024-11-20 14:31:45.482907] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:21:13.934  [2024-11-20 14:31:45.483054] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.934   14:31:45 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:21:13.934  [2024-11-20 14:31:45.497780] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:21:13.934  [2024-11-20 14:31:45.506603] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:21:13.934  [2024-11-20 14:31:45.506663] ublk_rpc.c:  63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.934   14:31:45 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:21:13.934   14:31:45 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:21:13.934   14:31:45 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76242
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76242 ']'
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76242
00:21:13.934    14:31:45 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:13.934    14:31:45 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76242
00:21:13.934  killing process with pid 76242
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76242'
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76242
00:21:13.934   14:31:45 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76242
00:21:13.934  [2024-11-20 14:31:47.038588] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:21:13.934  [2024-11-20 14:31:47.038852] ublk.c: 766:_ublk_fini_done: *DEBUG*: 
00:21:13.934  ************************************
00:21:13.934  END TEST ublk_recovery
00:21:13.934  ************************************
00:21:13.934  
00:21:13.934  real	1m5.785s
00:21:13.934  user	1m48.699s
00:21:13.934  sys	0m31.708s
00:21:13.934   14:31:48 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:13.934   14:31:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:21:13.934   14:31:48  -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:21:13.934   14:31:48  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@260 -- # timing_exit lib
00:21:13.934   14:31:48  -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:13.934   14:31:48  -- common/autotest_common.sh@10 -- # set +x
00:21:13.934   14:31:48  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']'
00:21:13.934   14:31:48  -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:21:13.934   14:31:48  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:21:13.934   14:31:48  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:13.934   14:31:48  -- common/autotest_common.sh@10 -- # set +x
00:21:13.934  ************************************
00:21:13.934  START TEST ftl
00:21:13.934  ************************************
00:21:13.934   14:31:48 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:21:13.934  * Looking for test storage...
00:21:13.934  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:21:13.934    14:31:48 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:21:13.934     14:31:48 ftl -- common/autotest_common.sh@1693 -- # lcov --version
00:21:13.934     14:31:48 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:21:13.934    14:31:48 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:21:13.934    14:31:48 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:13.934    14:31:48 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:13.934    14:31:48 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:13.934    14:31:48 ftl -- scripts/common.sh@336 -- # IFS=.-:
00:21:13.934    14:31:48 ftl -- scripts/common.sh@336 -- # read -ra ver1
00:21:13.934    14:31:48 ftl -- scripts/common.sh@337 -- # IFS=.-:
00:21:13.934    14:31:48 ftl -- scripts/common.sh@337 -- # read -ra ver2
00:21:13.934    14:31:48 ftl -- scripts/common.sh@338 -- # local 'op=<'
00:21:13.934    14:31:48 ftl -- scripts/common.sh@340 -- # ver1_l=2
00:21:13.934    14:31:48 ftl -- scripts/common.sh@341 -- # ver2_l=1
00:21:13.934    14:31:48 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:13.934    14:31:48 ftl -- scripts/common.sh@344 -- # case "$op" in
00:21:13.934    14:31:48 ftl -- scripts/common.sh@345 -- # : 1
00:21:13.934    14:31:48 ftl -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:13.934    14:31:48 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:13.934     14:31:48 ftl -- scripts/common.sh@365 -- # decimal 1
00:21:13.934     14:31:48 ftl -- scripts/common.sh@353 -- # local d=1
00:21:13.934     14:31:48 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:13.934     14:31:48 ftl -- scripts/common.sh@355 -- # echo 1
00:21:13.934    14:31:48 ftl -- scripts/common.sh@365 -- # ver1[v]=1
00:21:13.934     14:31:48 ftl -- scripts/common.sh@366 -- # decimal 2
00:21:13.934     14:31:48 ftl -- scripts/common.sh@353 -- # local d=2
00:21:13.934     14:31:48 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:13.934     14:31:48 ftl -- scripts/common.sh@355 -- # echo 2
00:21:13.934    14:31:48 ftl -- scripts/common.sh@366 -- # ver2[v]=2
00:21:13.934    14:31:48 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:13.934    14:31:48 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:13.934    14:31:48 ftl -- scripts/common.sh@368 -- # return 0
00:21:13.934    14:31:48 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:13.934    14:31:48 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:21:13.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:13.934  		--rc genhtml_branch_coverage=1
00:21:13.934  		--rc genhtml_function_coverage=1
00:21:13.934  		--rc genhtml_legend=1
00:21:13.934  		--rc geninfo_all_blocks=1
00:21:13.934  		--rc geninfo_unexecuted_blocks=1
00:21:13.934  		
00:21:13.934  		'
00:21:13.934    14:31:48 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:21:13.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:13.934  		--rc genhtml_branch_coverage=1
00:21:13.934  		--rc genhtml_function_coverage=1
00:21:13.934  		--rc genhtml_legend=1
00:21:13.934  		--rc geninfo_all_blocks=1
00:21:13.934  		--rc geninfo_unexecuted_blocks=1
00:21:13.934  		
00:21:13.934  		'
00:21:13.934    14:31:48 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:21:13.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:13.934  		--rc genhtml_branch_coverage=1
00:21:13.934  		--rc genhtml_function_coverage=1
00:21:13.934  		--rc genhtml_legend=1
00:21:13.934  		--rc geninfo_all_blocks=1
00:21:13.934  		--rc geninfo_unexecuted_blocks=1
00:21:13.934  		
00:21:13.934  		'
00:21:13.934    14:31:48 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:21:13.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:13.934  		--rc genhtml_branch_coverage=1
00:21:13.934  		--rc genhtml_function_coverage=1
00:21:13.934  		--rc genhtml_legend=1
00:21:13.934  		--rc geninfo_all_blocks=1
00:21:13.935  		--rc geninfo_unexecuted_blocks=1
00:21:13.935  		
00:21:13.935  		'
00:21:13.935   14:31:48 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:21:13.935      14:31:48 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:21:13.935     14:31:48 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:21:13.935    14:31:48 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:21:13.935     14:31:48 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:21:13.935    14:31:48 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:21:13.935    14:31:48 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:13.935    14:31:48 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:21:13.935    14:31:48 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:21:13.935    14:31:48 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:13.935    14:31:48 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:13.935    14:31:48 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:21:13.935    14:31:48 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:21:13.935    14:31:48 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:13.935    14:31:48 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:13.935    14:31:48 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:21:13.935    14:31:48 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:21:13.935    14:31:48 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:13.935    14:31:48 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:13.935    14:31:48 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:21:13.935    14:31:48 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:21:13.935    14:31:48 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:13.935    14:31:48 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:13.935    14:31:48 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:13.935    14:31:48 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:13.935    14:31:48 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:21:13.935    14:31:48 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:21:13.935    14:31:48 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:13.935    14:31:48 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:13.935   14:31:48 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:13.935   14:31:48 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
00:21:13.935   14:31:48 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:21:13.935   14:31:48 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:21:13.935   14:31:48 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:21:13.935   14:31:48 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:21:13.935  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:21:13.935  0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:13.935  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:13.935  0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:13.935  0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:21:13.935   14:31:49 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77028
00:21:13.935   14:31:49 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77028
00:21:13.935   14:31:49 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:21:13.935   14:31:49 ftl -- common/autotest_common.sh@835 -- # '[' -z 77028 ']'
00:21:13.935   14:31:49 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:13.935   14:31:49 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:13.935  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:13.935   14:31:49 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:13.935   14:31:49 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:13.935   14:31:49 ftl -- common/autotest_common.sh@10 -- # set +x
00:21:13.935  [2024-11-20 14:31:49.171253] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:21:13.935  [2024-11-20 14:31:49.171654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77028 ]
00:21:13.935  [2024-11-20 14:31:49.351186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:13.935  [2024-11-20 14:31:49.458083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:13.935   14:31:50 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:13.935   14:31:50 ftl -- common/autotest_common.sh@868 -- # return 0
00:21:13.935   14:31:50 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
00:21:13.935   14:31:50 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:21:13.935   14:31:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62
00:21:13.935    14:31:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720
00:21:13.935    14:31:52 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:21:13.935    14:31:52 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@50 -- # break
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']'
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@59 -- # base_size=1310720
00:21:13.935    14:31:52 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:21:13.935    14:31:52 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@63 -- # break
00:21:13.935   14:31:52 ftl -- ftl/ftl.sh@66 -- # killprocess 77028
00:21:13.935   14:31:52 ftl -- common/autotest_common.sh@954 -- # '[' -z 77028 ']'
00:21:13.935   14:31:52 ftl -- common/autotest_common.sh@958 -- # kill -0 77028
00:21:13.935    14:31:52 ftl -- common/autotest_common.sh@959 -- # uname
00:21:13.935   14:31:52 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:13.935    14:31:52 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77028
00:21:13.935   14:31:52 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:13.935  killing process with pid 77028
00:21:13.935   14:31:52 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:13.935   14:31:52 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77028'
00:21:13.935   14:31:52 ftl -- common/autotest_common.sh@973 -- # kill 77028
00:21:13.935   14:31:52 ftl -- common/autotest_common.sh@978 -- # wait 77028
00:21:16.465   14:31:54 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']'
00:21:16.465   14:31:54 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:21:16.465   14:31:54 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:21:16.465   14:31:54 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:16.465   14:31:54 ftl -- common/autotest_common.sh@10 -- # set +x
00:21:16.465  ************************************
00:21:16.465  START TEST ftl_fio_basic
00:21:16.465  ************************************
00:21:16.465   14:31:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:21:16.465  * Looking for test storage...
00:21:16.465  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-:
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-:
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:21:16.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:16.465  		--rc genhtml_branch_coverage=1
00:21:16.465  		--rc genhtml_function_coverage=1
00:21:16.465  		--rc genhtml_legend=1
00:21:16.465  		--rc geninfo_all_blocks=1
00:21:16.465  		--rc geninfo_unexecuted_blocks=1
00:21:16.465  		
00:21:16.465  		'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:21:16.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:16.465  		--rc genhtml_branch_coverage=1
00:21:16.465  		--rc genhtml_function_coverage=1
00:21:16.465  		--rc genhtml_legend=1
00:21:16.465  		--rc geninfo_all_blocks=1
00:21:16.465  		--rc geninfo_unexecuted_blocks=1
00:21:16.465  		
00:21:16.465  		'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:21:16.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:16.465  		--rc genhtml_branch_coverage=1
00:21:16.465  		--rc genhtml_function_coverage=1
00:21:16.465  		--rc genhtml_legend=1
00:21:16.465  		--rc geninfo_all_blocks=1
00:21:16.465  		--rc geninfo_unexecuted_blocks=1
00:21:16.465  		
00:21:16.465  		'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:21:16.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:16.465  		--rc genhtml_branch_coverage=1
00:21:16.465  		--rc genhtml_function_coverage=1
00:21:16.465  		--rc genhtml_legend=1
00:21:16.465  		--rc geninfo_all_blocks=1
00:21:16.465  		--rc geninfo_unexecuted_blocks=1
00:21:16.465  		
00:21:16.465  		'
00:21:16.465   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:21:16.465      14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:21:16.465     14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid=
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:16.465    14:31:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128'
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid=
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]]
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']'
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77177
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77177
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77177 ']'
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:16.466  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:16.466   14:31:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
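waitforlisten blocks until the just-launched spdk_tgt (PID 77177) answers on /var/tmp/spdk.sock, with max_retries=100 as set above. A hedged approximation of that polling loop, assuming it uses the standard rpc.py client and the built-in rpc_get_methods call rather than this exact code:

  # Poll the RPC socket until the target responds or retries run out.
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done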
00:21:16.466  [2024-11-20 14:31:55.336859] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:21:16.466  [2024-11-20 14:31:55.337135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77177 ]
00:21:16.724  [2024-11-20 14:31:55.524771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:16.724  [2024-11-20 14:31:55.634060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:21:16.724  [2024-11-20 14:31:55.634144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:16.724  [2024-11-20 14:31:55.634143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
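spdk_tgt was started with -m 7, and 0x7 = 0b111 selects cores 0-2: hence "Total cores available: 3" and one reactor per core above. Decoding the mask in shell:

  mask=0x7
  printf 'cores:'
  for ((c = 0; c < 8; c++)); do
      (( (mask >> c) & 1 )) && printf ' %d' "$c"
  done
  echo    # prints: cores: 0 1 2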
00:21:17.657   14:31:56 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:17.657   14:31:56 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0
00:21:17.657    14:31:56 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:21:17.657    14:31:56 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0
00:21:17.657    14:31:56 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:21:17.657    14:31:56 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424
00:21:17.657    14:31:56 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev
00:21:17.657     14:31:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:21:17.915    14:31:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:21:17.915    14:31:56 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size
00:21:17.915     14:31:56 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:21:17.915     14:31:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:21:17.915     14:31:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:21:17.915     14:31:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:21:17.915     14:31:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:21:17.915      14:31:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:21:18.174     14:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:21:18.174    {
00:21:18.174      "name": "nvme0n1",
00:21:18.174      "aliases": [
00:21:18.174        "f79007dd-fdcc-4bf2-b711-b57056252d27"
00:21:18.174      ],
00:21:18.174      "product_name": "NVMe disk",
00:21:18.174      "block_size": 4096,
00:21:18.174      "num_blocks": 1310720,
00:21:18.174      "uuid": "f79007dd-fdcc-4bf2-b711-b57056252d27",
00:21:18.174      "numa_id": -1,
00:21:18.174      "assigned_rate_limits": {
00:21:18.174        "rw_ios_per_sec": 0,
00:21:18.174        "rw_mbytes_per_sec": 0,
00:21:18.174        "r_mbytes_per_sec": 0,
00:21:18.174        "w_mbytes_per_sec": 0
00:21:18.174      },
00:21:18.174      "claimed": false,
00:21:18.174      "zoned": false,
00:21:18.174      "supported_io_types": {
00:21:18.174        "read": true,
00:21:18.174        "write": true,
00:21:18.174        "unmap": true,
00:21:18.174        "flush": true,
00:21:18.174        "reset": true,
00:21:18.174        "nvme_admin": true,
00:21:18.174        "nvme_io": true,
00:21:18.174        "nvme_io_md": false,
00:21:18.174        "write_zeroes": true,
00:21:18.174        "zcopy": false,
00:21:18.174        "get_zone_info": false,
00:21:18.174        "zone_management": false,
00:21:18.174        "zone_append": false,
00:21:18.174        "compare": true,
00:21:18.174        "compare_and_write": false,
00:21:18.174        "abort": true,
00:21:18.174        "seek_hole": false,
00:21:18.174        "seek_data": false,
00:21:18.174        "copy": true,
00:21:18.174        "nvme_iov_md": false
00:21:18.174      },
00:21:18.174      "driver_specific": {
00:21:18.174        "nvme": [
00:21:18.174          {
00:21:18.174            "pci_address": "0000:00:11.0",
00:21:18.174            "trid": {
00:21:18.174              "trtype": "PCIe",
00:21:18.174              "traddr": "0000:00:11.0"
00:21:18.174            },
00:21:18.174            "ctrlr_data": {
00:21:18.174              "cntlid": 0,
00:21:18.174              "vendor_id": "0x1b36",
00:21:18.174              "model_number": "QEMU NVMe Ctrl",
00:21:18.174              "serial_number": "12341",
00:21:18.174              "firmware_revision": "8.0.0",
00:21:18.174              "subnqn": "nqn.2019-08.org.qemu:12341",
00:21:18.174              "oacs": {
00:21:18.174                "security": 0,
00:21:18.174                "format": 1,
00:21:18.174                "firmware": 0,
00:21:18.174                "ns_manage": 1
00:21:18.174              },
00:21:18.174              "multi_ctrlr": false,
00:21:18.174              "ana_reporting": false
00:21:18.174            },
00:21:18.174            "vs": {
00:21:18.174              "nvme_version": "1.4"
00:21:18.174            },
00:21:18.174            "ns_data": {
00:21:18.174              "id": 1,
00:21:18.174              "can_share": false
00:21:18.174            }
00:21:18.174          }
00:21:18.174        ],
00:21:18.174        "mp_policy": "active_passive"
00:21:18.174      }
00:21:18.174    }
00:21:18.174  ]'
00:21:18.174      14:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:21:18.174     14:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:21:18.174      14:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:21:18.438     14:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720
00:21:18.438     14:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:21:18.438     14:31:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120
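get_bdev_size derives the MiB figure from the two jq-extracted fields: 1310720 blocks x 4096 bytes = 5 GiB = 5120 MiB. The same arithmetic in shell:

  bs=4096 nb=1310720
  echo $(( nb * bs / 1024 / 1024 ))   # 5120 (MiB)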
00:21:18.438    14:31:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120
00:21:18.438    14:31:57 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:21:18.438    14:31:57 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols
00:21:18.438     14:31:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:21:18.438     14:31:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:21:18.701    14:31:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores=
00:21:18.701     14:31:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:21:19.266    14:31:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=63bd6bb7-7481-4a78-8f79-f2df3ef03401
00:21:19.266    14:31:57 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 63bd6bb7-7481-4a78-8f79-f2df3ef03401
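The base-bdev setup so far reduces to three RPCs taken verbatim from this trace (UUIDs will differ on another run): attach the PCIe controller, create a logical volume store on its namespace, then carve out a 103424 MiB thin-provisioned volume:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)   # prints the new lvstore UUID
  # 103424 MiB thin-provisioned (-t) volume inside that store
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"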
00:21:19.550   14:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:19.550    14:31:58 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:19.550    14:31:58 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0
00:21:19.550    14:31:58 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:21:19.550    14:31:58 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:19.550    14:31:58 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size=
00:21:19.550     14:31:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:19.550     14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:19.550     14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:21:19.550     14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:21:19.550     14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:21:19.550      14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:20.116     14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:21:20.116    {
00:21:20.116      "name": "6a6af01a-8d07-4a67-a194-ca2920c5d8b5",
00:21:20.116      "aliases": [
00:21:20.116        "lvs/nvme0n1p0"
00:21:20.116      ],
00:21:20.116      "product_name": "Logical Volume",
00:21:20.116      "block_size": 4096,
00:21:20.116      "num_blocks": 26476544,
00:21:20.116      "uuid": "6a6af01a-8d07-4a67-a194-ca2920c5d8b5",
00:21:20.116      "assigned_rate_limits": {
00:21:20.116        "rw_ios_per_sec": 0,
00:21:20.116        "rw_mbytes_per_sec": 0,
00:21:20.116        "r_mbytes_per_sec": 0,
00:21:20.116        "w_mbytes_per_sec": 0
00:21:20.116      },
00:21:20.116      "claimed": false,
00:21:20.116      "zoned": false,
00:21:20.116      "supported_io_types": {
00:21:20.116        "read": true,
00:21:20.116        "write": true,
00:21:20.116        "unmap": true,
00:21:20.116        "flush": false,
00:21:20.116        "reset": true,
00:21:20.116        "nvme_admin": false,
00:21:20.116        "nvme_io": false,
00:21:20.116        "nvme_io_md": false,
00:21:20.116        "write_zeroes": true,
00:21:20.116        "zcopy": false,
00:21:20.116        "get_zone_info": false,
00:21:20.116        "zone_management": false,
00:21:20.116        "zone_append": false,
00:21:20.116        "compare": false,
00:21:20.116        "compare_and_write": false,
00:21:20.116        "abort": false,
00:21:20.116        "seek_hole": true,
00:21:20.116        "seek_data": true,
00:21:20.116        "copy": false,
00:21:20.116        "nvme_iov_md": false
00:21:20.116      },
00:21:20.116      "driver_specific": {
00:21:20.116        "lvol": {
00:21:20.116          "lvol_store_uuid": "63bd6bb7-7481-4a78-8f79-f2df3ef03401",
00:21:20.116          "base_bdev": "nvme0n1",
00:21:20.116          "thin_provision": true,
00:21:20.116          "num_allocated_clusters": 0,
00:21:20.116          "snapshot": false,
00:21:20.116          "clone": false,
00:21:20.116          "esnap_clone": false
00:21:20.116        }
00:21:20.116      }
00:21:20.116    }
00:21:20.116  ]'
00:21:20.116      14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:21:20.116     14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:21:20.116      14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:21:20.116     14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:21:20.116     14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:21:20.116     14:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:21:20.116    14:31:58 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171
00:21:20.116    14:31:58 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev
00:21:20.116     14:31:58 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:21:20.683    14:31:59 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:21:20.683    14:31:59 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]]
00:21:20.683     14:31:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:20.683     14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:20.683     14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:21:20.683     14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:21:20.683     14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:21:20.683      14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:20.941     14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:21:20.941    {
00:21:20.941      "name": "6a6af01a-8d07-4a67-a194-ca2920c5d8b5",
00:21:20.941      "aliases": [
00:21:20.941        "lvs/nvme0n1p0"
00:21:20.941      ],
00:21:20.941      "product_name": "Logical Volume",
00:21:20.941      "block_size": 4096,
00:21:20.941      "num_blocks": 26476544,
00:21:20.941      "uuid": "6a6af01a-8d07-4a67-a194-ca2920c5d8b5",
00:21:20.941      "assigned_rate_limits": {
00:21:20.941        "rw_ios_per_sec": 0,
00:21:20.941        "rw_mbytes_per_sec": 0,
00:21:20.941        "r_mbytes_per_sec": 0,
00:21:20.941        "w_mbytes_per_sec": 0
00:21:20.941      },
00:21:20.941      "claimed": false,
00:21:20.941      "zoned": false,
00:21:20.941      "supported_io_types": {
00:21:20.941        "read": true,
00:21:20.941        "write": true,
00:21:20.941        "unmap": true,
00:21:20.941        "flush": false,
00:21:20.941        "reset": true,
00:21:20.941        "nvme_admin": false,
00:21:20.941        "nvme_io": false,
00:21:20.941        "nvme_io_md": false,
00:21:20.941        "write_zeroes": true,
00:21:20.941        "zcopy": false,
00:21:20.941        "get_zone_info": false,
00:21:20.941        "zone_management": false,
00:21:20.941        "zone_append": false,
00:21:20.941        "compare": false,
00:21:20.941        "compare_and_write": false,
00:21:20.942        "abort": false,
00:21:20.942        "seek_hole": true,
00:21:20.942        "seek_data": true,
00:21:20.942        "copy": false,
00:21:20.942        "nvme_iov_md": false
00:21:20.942      },
00:21:20.942      "driver_specific": {
00:21:20.942        "lvol": {
00:21:20.942          "lvol_store_uuid": "63bd6bb7-7481-4a78-8f79-f2df3ef03401",
00:21:20.942          "base_bdev": "nvme0n1",
00:21:20.942          "thin_provision": true,
00:21:20.942          "num_allocated_clusters": 0,
00:21:20.942          "snapshot": false,
00:21:20.942          "clone": false,
00:21:20.942          "esnap_clone": false
00:21:20.942        }
00:21:20.942      }
00:21:20.942    }
00:21:20.942  ]'
00:21:20.942      14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:21:20.942     14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:21:20.942      14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:21:20.942     14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:21:20.942     14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:21:20.942     14:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:21:20.942    14:31:59 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171
00:21:20.942    14:31:59 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
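cache_size=5171 matches 103424 / 20 in integer division, i.e. the NV cache is sized at 5% of the base volume; bdev_split_create then carves a single 5171 MiB partition (nvc0n1p0) out of nvc0n1. A sketch of that sizing, hedged as an inference from the numbers rather than a quote from common.sh:

  base_size=103424                    # MiB, from get_bdev_size above
  cache_size=$(( base_size / 20 ))    # 5171 MiB, ~5% of the base volume
  $rpc bdev_split_create nvc0n1 -s "$cache_size" 1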
00:21:21.507   14:32:00 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0
00:21:21.507   14:32:00 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60
00:21:21.507   14:32:00 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']'
00:21:21.507  /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected
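The stderr line above records a script bug rather than a test failure: the trace shows '[' -eq 1 ']', meaning an unquoted variable expanded to nothing and left the test with no left-hand operand. The standard fix for this class of bug is to quote the expansion and give it a default; flag below is a hypothetical stand-in for whatever variable fio.sh tests at line 52:

  flag=''   # hypothetical stand-in, empty here
  # Broken: with $flag empty this expands to '[' -eq 1 ']' -> "unary operator expected"
  #   [ $flag -eq 1 ] && echo on
  # Robust: quote the expansion and default an empty value to 0
  [ "${flag:-0}" -eq 1 ] && echo on || echo off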
00:21:21.507    14:32:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:21.507    14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:21.507    14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info
00:21:21.507    14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs
00:21:21.507    14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb
00:21:21.507     14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6a6af01a-8d07-4a67-a194-ca2920c5d8b5
00:21:21.764    14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[
00:21:21.764    {
00:21:21.764      "name": "6a6af01a-8d07-4a67-a194-ca2920c5d8b5",
00:21:21.764      "aliases": [
00:21:21.764        "lvs/nvme0n1p0"
00:21:21.764      ],
00:21:21.764      "product_name": "Logical Volume",
00:21:21.764      "block_size": 4096,
00:21:21.764      "num_blocks": 26476544,
00:21:21.764      "uuid": "6a6af01a-8d07-4a67-a194-ca2920c5d8b5",
00:21:21.764      "assigned_rate_limits": {
00:21:21.764        "rw_ios_per_sec": 0,
00:21:21.764        "rw_mbytes_per_sec": 0,
00:21:21.764        "r_mbytes_per_sec": 0,
00:21:21.765        "w_mbytes_per_sec": 0
00:21:21.765      },
00:21:21.765      "claimed": false,
00:21:21.765      "zoned": false,
00:21:21.765      "supported_io_types": {
00:21:21.765        "read": true,
00:21:21.765        "write": true,
00:21:21.765        "unmap": true,
00:21:21.765        "flush": false,
00:21:21.765        "reset": true,
00:21:21.765        "nvme_admin": false,
00:21:21.765        "nvme_io": false,
00:21:21.765        "nvme_io_md": false,
00:21:21.765        "write_zeroes": true,
00:21:21.765        "zcopy": false,
00:21:21.765        "get_zone_info": false,
00:21:21.765        "zone_management": false,
00:21:21.765        "zone_append": false,
00:21:21.765        "compare": false,
00:21:21.765        "compare_and_write": false,
00:21:21.765        "abort": false,
00:21:21.765        "seek_hole": true,
00:21:21.765        "seek_data": true,
00:21:21.765        "copy": false,
00:21:21.765        "nvme_iov_md": false
00:21:21.765      },
00:21:21.765      "driver_specific": {
00:21:21.765        "lvol": {
00:21:21.765          "lvol_store_uuid": "63bd6bb7-7481-4a78-8f79-f2df3ef03401",
00:21:21.765          "base_bdev": "nvme0n1",
00:21:21.765          "thin_provision": true,
00:21:21.765          "num_allocated_clusters": 0,
00:21:21.765          "snapshot": false,
00:21:21.765          "clone": false,
00:21:21.765          "esnap_clone": false
00:21:21.765        }
00:21:21.765      }
00:21:21.765    }
00:21:21.765  ]'
00:21:21.765     14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:21:21.765    14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096
00:21:21.765     14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:21:22.023    14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544
00:21:22.023    14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:21:22.023    14:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424
00:21:22.023   14:32:00 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60
00:21:22.023   14:32:00 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']'
00:21:22.023   14:32:00 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6a6af01a-8d07-4a67-a194-ca2920c5d8b5 -c nvc0n1p0 --l2p_dram_limit 60
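The figures in the startup trace below follow from this command: the resulting ftl0 bdev exposes 20971520 blocks x 4 KiB = 80 GiB, so the full L2P table needs 20971520 entries of 4 bytes = 80 MiB (the "Region l2p" line below), of which --l2p_dram_limit 60 caps the resident portion (the log later reports "l2p maximum resident size is: 59 (of 60) MiB"). As arithmetic:

  entries=20971520 addr_bytes=4
  echo $(( entries * addr_bytes / 1024 / 1024 ))   # 80 (MiB full L2P table)
  echo $(( entries * 4096 / 1024 / 1024 / 1024 ))  # 80 (GiB of addressable data)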
00:21:22.281  [2024-11-20 14:32:01.084856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.084939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:21:22.282  [2024-11-20 14:32:01.084979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:21:22.282  [2024-11-20 14:32:01.085002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.085144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.085178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:21:22.282  [2024-11-20 14:32:01.085207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.092 ms
00:21:22.282  [2024-11-20 14:32:01.085229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.085291] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:21:22.282  [2024-11-20 14:32:01.086743] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:21:22.282  [2024-11-20 14:32:01.086822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.086847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:21:22.282  [2024-11-20 14:32:01.086874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.531 ms
00:21:22.282  [2024-11-20 14:32:01.086894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.087158] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bbe9a1d0-a22d-431c-a07e-c08dcd36a123
00:21:22.282  [2024-11-20 14:32:01.088603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.088676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:21:22.282  [2024-11-20 14:32:01.088705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.081 ms
00:21:22.282  [2024-11-20 14:32:01.088735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.094250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.094348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:21:22.282  [2024-11-20 14:32:01.094381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.362 ms
00:21:22.282  [2024-11-20 14:32:01.094410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.094640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.094686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:21:22.282  [2024-11-20 14:32:01.094712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.157 ms
00:21:22.282  [2024-11-20 14:32:01.094745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.094923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.094979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:21:22.282  [2024-11-20 14:32:01.095004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:21:22.282  [2024-11-20 14:32:01.095028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.095096] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:21:22.282  [2024-11-20 14:32:01.101627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.101720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:21:22.282  [2024-11-20 14:32:01.101763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.536 ms
00:21:22.282  [2024-11-20 14:32:01.101793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.101902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.101934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:21:22.282  [2024-11-20 14:32:01.101967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.022 ms
00:21:22.282  [2024-11-20 14:32:01.101990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.102088] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:21:22.282  [2024-11-20 14:32:01.102351] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:21:22.282  [2024-11-20 14:32:01.102421] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:21:22.282  [2024-11-20 14:32:01.102450] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:21:22.282  [2024-11-20 14:32:01.102481] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:21:22.282  [2024-11-20 14:32:01.102504] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:21:22.282  [2024-11-20 14:32:01.102528] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:21:22.282  [2024-11-20 14:32:01.102546] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:21:22.282  [2024-11-20 14:32:01.102588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:21:22.282  [2024-11-20 14:32:01.102612] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:21:22.282  [2024-11-20 14:32:01.102635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.102656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:21:22.282  [2024-11-20 14:32:01.102683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.562 ms
00:21:22.282  [2024-11-20 14:32:01.102702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.102841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.282  [2024-11-20 14:32:01.102867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:21:22.282  [2024-11-20 14:32:01.102893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.084 ms
00:21:22.282  [2024-11-20 14:32:01.102912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.282  [2024-11-20 14:32:01.103079] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:21:22.282  [2024-11-20 14:32:01.103127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:21:22.282  [2024-11-20 14:32:01.103160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:21:22.282  [2024-11-20 14:32:01.103181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:22.282  [2024-11-20 14:32:01.103205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:21:22.282  [2024-11-20 14:32:01.103226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:21:22.282  [2024-11-20 14:32:01.103260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:21:22.282  [2024-11-20 14:32:01.103282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:21:22.282  [2024-11-20 14:32:01.103306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:21:22.282  [2024-11-20 14:32:01.103325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:21:22.282  [2024-11-20 14:32:01.103347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:21:22.282  [2024-11-20 14:32:01.103366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:21:22.282  [2024-11-20 14:32:01.103387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:21:22.282  [2024-11-20 14:32:01.103405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:21:22.282  [2024-11-20 14:32:01.103427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:21:22.282  [2024-11-20 14:32:01.103445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:22.282  [2024-11-20 14:32:01.103473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:21:22.282  [2024-11-20 14:32:01.103494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:21:22.282  [2024-11-20 14:32:01.103517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:22.282  [2024-11-20 14:32:01.103536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:21:22.282  [2024-11-20 14:32:01.103557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:21:22.282  [2024-11-20 14:32:01.103599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:22.282  [2024-11-20 14:32:01.103626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:21:22.282  [2024-11-20 14:32:01.103646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:21:22.282  [2024-11-20 14:32:01.103668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:22.282  [2024-11-20 14:32:01.103687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:21:22.282  [2024-11-20 14:32:01.103709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:21:22.282  [2024-11-20 14:32:01.103729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:22.282  [2024-11-20 14:32:01.103752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:21:22.282  [2024-11-20 14:32:01.103772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:21:22.282  [2024-11-20 14:32:01.103794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:21:22.282  [2024-11-20 14:32:01.103811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:21:22.283  [2024-11-20 14:32:01.103836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:21:22.283  [2024-11-20 14:32:01.103855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:21:22.283  [2024-11-20 14:32:01.103879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:21:22.283  [2024-11-20 14:32:01.103922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:21:22.283  [2024-11-20 14:32:01.103946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:21:22.283  [2024-11-20 14:32:01.103972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:21:22.283  [2024-11-20 14:32:01.103994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:21:22.283  [2024-11-20 14:32:01.104012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:22.283  [2024-11-20 14:32:01.104035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:21:22.283  [2024-11-20 14:32:01.104056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:21:22.283  [2024-11-20 14:32:01.104080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:22.283  [2024-11-20 14:32:01.104098] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:21:22.283  [2024-11-20 14:32:01.104121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:21:22.283  [2024-11-20 14:32:01.104141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:21:22.283  [2024-11-20 14:32:01.104164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:21:22.283  [2024-11-20 14:32:01.104184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:21:22.283  [2024-11-20 14:32:01.104211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:21:22.283  [2024-11-20 14:32:01.104233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:21:22.283  [2024-11-20 14:32:01.104258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:21:22.283  [2024-11-20 14:32:01.104278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:21:22.283  [2024-11-20 14:32:01.104302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:21:22.283  [2024-11-20 14:32:01.104339] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:21:22.283  [2024-11-20 14:32:01.104383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:22.283  [2024-11-20 14:32:01.104407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:21:22.283  [2024-11-20 14:32:01.104433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:21:22.283  [2024-11-20 14:32:01.104454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:21:22.283  [2024-11-20 14:32:01.104488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:21:22.283  [2024-11-20 14:32:01.104512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:21:22.283  [2024-11-20 14:32:01.104542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:21:22.283  [2024-11-20 14:32:01.104590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:21:22.283  [2024-11-20 14:32:01.104629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:21:22.283  [2024-11-20 14:32:01.104654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:21:22.283  [2024-11-20 14:32:01.104690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:21:22.283  [2024-11-20 14:32:01.104712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:21:22.283  [2024-11-20 14:32:01.104745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:21:22.283  [2024-11-20 14:32:01.104768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:21:22.283  [2024-11-20 14:32:01.104797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:21:22.283  [2024-11-20 14:32:01.104818] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:21:22.283  [2024-11-20 14:32:01.104901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:21:22.283  [2024-11-20 14:32:01.104938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:21:22.283  [2024-11-20 14:32:01.104962] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:21:22.283  [2024-11-20 14:32:01.104983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:21:22.283  [2024-11-20 14:32:01.105006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:21:22.283  [2024-11-20 14:32:01.105030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:22.283  [2024-11-20 14:32:01.105054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:21:22.283  [2024-11-20 14:32:01.105076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.039 ms
00:21:22.283  [2024-11-20 14:32:01.105100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:22.283  [2024-11-20 14:32:01.105258] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:21:22.283  [2024-11-20 14:32:01.105305] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:21:26.467  [2024-11-20 14:32:04.844983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:04.845093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:21:26.467  [2024-11-20 14:32:04.845116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3739.758 ms
00:21:26.467  [2024-11-20 14:32:04.845132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:04.881464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:04.881537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:21:26.467  [2024-11-20 14:32:04.881559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.027 ms
00:21:26.467  [2024-11-20 14:32:04.881588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:04.881794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:04.881821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:21:26.467  [2024-11-20 14:32:04.881835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.075 ms
00:21:26.467  [2024-11-20 14:32:04.881852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:04.928359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:04.928432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:21:26.467  [2024-11-20 14:32:04.928457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 46.435 ms
00:21:26.467  [2024-11-20 14:32:04.928474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:04.928541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:04.928561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:21:26.467  [2024-11-20 14:32:04.928590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:21:26.467  [2024-11-20 14:32:04.928605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:04.929047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:04.929078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:21:26.467  [2024-11-20 14:32:04.929094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.310 ms
00:21:26.467  [2024-11-20 14:32:04.929112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:04.929280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:04.929303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:21:26.467  [2024-11-20 14:32:04.929316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.131 ms
00:21:26.467  [2024-11-20 14:32:04.929332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:04.947900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:04.947970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:21:26.467  [2024-11-20 14:32:04.947990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.526 ms
00:21:26.467  [2024-11-20 14:32:04.948005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:04.961658] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:21:26.467  [2024-11-20 14:32:04.976074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:04.976173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:21:26.467  [2024-11-20 14:32:04.976198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.891 ms
00:21:26.467  [2024-11-20 14:32:04.976215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.077370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.077452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:21:26.467  [2024-11-20 14:32:05.077481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 101.069 ms
00:21:26.467  [2024-11-20 14:32:05.077494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.077757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.077778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:21:26.467  [2024-11-20 14:32:05.077798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.197 ms
00:21:26.467  [2024-11-20 14:32:05.077811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.111552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.111650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:21:26.467  [2024-11-20 14:32:05.111677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.612 ms
00:21:26.467  [2024-11-20 14:32:05.111691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.144983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.145057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:21:26.467  [2024-11-20 14:32:05.145081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.212 ms
00:21:26.467  [2024-11-20 14:32:05.145094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.145884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.145915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:21:26.467  [2024-11-20 14:32:05.145933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.732 ms
00:21:26.467  [2024-11-20 14:32:05.145946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.248180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.248258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:21:26.467  [2024-11-20 14:32:05.248286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 102.112 ms
00:21:26.467  [2024-11-20 14:32:05.248304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.282605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.282687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:21:26.467  [2024-11-20 14:32:05.282711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.092 ms
00:21:26.467  [2024-11-20 14:32:05.282724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.316357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.316436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:21:26.467  [2024-11-20 14:32:05.316459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.514 ms
00:21:26.467  [2024-11-20 14:32:05.316472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.350039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.350120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:21:26.467  [2024-11-20 14:32:05.350145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.460 ms
00:21:26.467  [2024-11-20 14:32:05.350158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.350267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.350286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:21:26.467  [2024-11-20 14:32:05.350329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:21:26.467  [2024-11-20 14:32:05.350343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.350605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:26.467  [2024-11-20 14:32:05.350635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:21:26.467  [2024-11-20 14:32:05.350652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.088 ms
00:21:26.467  [2024-11-20 14:32:05.350665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:26.467  [2024-11-20 14:32:05.352044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4266.654 ms, result 0
00:21:26.467  {
00:21:26.467    "name": "ftl0",
00:21:26.467    "uuid": "bbe9a1d0-a22d-431c-a07e-c08dcd36a123"
00:21:26.467  }
00:21:26.467   14:32:05 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0
00:21:26.467   14:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:21:26.467   14:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:21:26.467   14:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i
00:21:26.467   14:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:21:26.467   14:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:21:26.468   14:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:21:26.725   14:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:21:26.983  [
00:21:26.983    {
00:21:26.983      "name": "ftl0",
00:21:26.983      "aliases": [
00:21:26.983        "bbe9a1d0-a22d-431c-a07e-c08dcd36a123"
00:21:26.983      ],
00:21:26.983      "product_name": "FTL disk",
00:21:26.983      "block_size": 4096,
00:21:26.983      "num_blocks": 20971520,
00:21:26.983      "uuid": "bbe9a1d0-a22d-431c-a07e-c08dcd36a123",
00:21:26.983      "assigned_rate_limits": {
00:21:26.983        "rw_ios_per_sec": 0,
00:21:26.983        "rw_mbytes_per_sec": 0,
00:21:26.983        "r_mbytes_per_sec": 0,
00:21:26.983        "w_mbytes_per_sec": 0
00:21:26.983      },
00:21:26.983      "claimed": false,
00:21:26.983      "zoned": false,
00:21:26.983      "supported_io_types": {
00:21:26.983        "read": true,
00:21:26.983        "write": true,
00:21:26.983        "unmap": true,
00:21:26.983        "flush": true,
00:21:26.983        "reset": false,
00:21:26.983        "nvme_admin": false,
00:21:26.983        "nvme_io": false,
00:21:26.983        "nvme_io_md": false,
00:21:26.983        "write_zeroes": true,
00:21:26.983        "zcopy": false,
00:21:26.983        "get_zone_info": false,
00:21:26.983        "zone_management": false,
00:21:26.983        "zone_append": false,
00:21:26.983        "compare": false,
00:21:26.983        "compare_and_write": false,
00:21:26.983        "abort": false,
00:21:26.983        "seek_hole": false,
00:21:26.983        "seek_data": false,
00:21:26.983        "copy": false,
00:21:26.983        "nvme_iov_md": false
00:21:26.983      },
00:21:26.983      "driver_specific": {
00:21:26.983        "ftl": {
00:21:26.983          "base_bdev": "6a6af01a-8d07-4a67-a194-ca2920c5d8b5",
00:21:26.983          "cache": "nvc0n1p0"
00:21:26.983        }
00:21:26.983      }
00:21:26.983    }
00:21:26.983  ]
00:21:26.983   14:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0
00:21:26.983   14:32:05 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": ['
00:21:26.983   14:32:05 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:21:27.241   14:32:06 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}'
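Lines fio.sh@68-70 assemble the configuration fio's spdk_bdev plugin will load by wrapping the saved bdev subsystem config in a {"subsystems": [...]} envelope. A hedged reconstruction of that step, assuming the combined output is redirected to the FTL_JSON_CONF path exported earlier:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  {
      echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
  } > "$FTL_JSON_CONF"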
00:21:27.241   14:32:06 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:21:27.499  [2024-11-20 14:32:06.461285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.499  [2024-11-20 14:32:06.461375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:21:27.499  [2024-11-20 14:32:06.461409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:21:27.499  [2024-11-20 14:32:06.461433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.499  [2024-11-20 14:32:06.461538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:27.499  [2024-11-20 14:32:06.466603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.499  [2024-11-20 14:32:06.466667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:21:27.499  [2024-11-20 14:32:06.466701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.994 ms
00:21:27.499  [2024-11-20 14:32:06.466722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.499  [2024-11-20 14:32:06.467404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.499  [2024-11-20 14:32:06.467452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:21:27.499  [2024-11-20 14:32:06.467480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.601 ms
00:21:27.499  [2024-11-20 14:32:06.467499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.499  [2024-11-20 14:32:06.471323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.499  [2024-11-20 14:32:06.471391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:21:27.499  [2024-11-20 14:32:06.471422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.773 ms
00:21:27.499  [2024-11-20 14:32:06.471440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.499  [2024-11-20 14:32:06.479727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.499  [2024-11-20 14:32:06.479815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:21:27.499  [2024-11-20 14:32:06.479850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.214 ms
00:21:27.499  [2024-11-20 14:32:06.479870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.759  [2024-11-20 14:32:06.528291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.759  [2024-11-20 14:32:06.528399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:21:27.759  [2024-11-20 14:32:06.528435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 48.184 ms
00:21:27.759  [2024-11-20 14:32:06.528454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.759  [2024-11-20 14:32:06.557713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.759  [2024-11-20 14:32:06.558092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:21:27.759  [2024-11-20 14:32:06.558154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.067 ms
00:21:27.759  [2024-11-20 14:32:06.558176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.759  [2024-11-20 14:32:06.558563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.759  [2024-11-20 14:32:06.558634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:21:27.759  [2024-11-20 14:32:06.558660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.268 ms
00:21:27.759  [2024-11-20 14:32:06.558678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.759  [2024-11-20 14:32:06.607419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.759  [2024-11-20 14:32:06.607529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:21:27.759  [2024-11-20 14:32:06.607593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 48.677 ms
00:21:27.759  [2024-11-20 14:32:06.607619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.759  [2024-11-20 14:32:06.652340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.759  [2024-11-20 14:32:06.652422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:21:27.759  [2024-11-20 14:32:06.652448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 44.604 ms
00:21:27.759  [2024-11-20 14:32:06.652460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.759  [2024-11-20 14:32:06.684358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.759  [2024-11-20 14:32:06.684438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:21:27.759  [2024-11-20 14:32:06.684463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.791 ms
00:21:27.759  [2024-11-20 14:32:06.684475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.759  [2024-11-20 14:32:06.716390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.759  [2024-11-20 14:32:06.716695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:21:27.759  [2024-11-20 14:32:06.716736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.672 ms
00:21:27.759  [2024-11-20 14:32:06.716758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.759  [2024-11-20 14:32:06.716854] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:21:27.759  [2024-11-20 14:32:06.716883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.716901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.716915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.716930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.716943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.716958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.716970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.716988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.759  [2024-11-20 14:32:06.717341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.717986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:21:27.760  [2024-11-20 14:32:06.718360] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:21:27.760  [2024-11-20 14:32:06.718374] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         bbe9a1d0-a22d-431c-a07e-c08dcd36a123
00:21:27.760  [2024-11-20 14:32:06.718386] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:21:27.760  [2024-11-20 14:32:06.718401] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:21:27.760  [2024-11-20 14:32:06.718412] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:21:27.760  [2024-11-20 14:32:06.718430] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:21:27.760  [2024-11-20 14:32:06.718440] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:27.760  [2024-11-20 14:32:06.718454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:21:27.760  [2024-11-20 14:32:06.718464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:21:27.760  [2024-11-20 14:32:06.718476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:21:27.760  [2024-11-20 14:32:06.718486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:21:27.760  [2024-11-20 14:32:06.718500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.760  [2024-11-20 14:32:06.718511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:21:27.760  [2024-11-20 14:32:06.718526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.652 ms
00:21:27.760  [2024-11-20 14:32:06.718537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.760  [2024-11-20 14:32:06.735751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.760  [2024-11-20 14:32:06.736025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:21:27.760  [2024-11-20 14:32:06.736062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.075 ms
00:21:27.760  [2024-11-20 14:32:06.736077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:27.760  [2024-11-20 14:32:06.736546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:27.760  [2024-11-20 14:32:06.736595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:21:27.760  [2024-11-20 14:32:06.736616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.408 ms
00:21:27.760  [2024-11-20 14:32:06.736628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.795811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.795885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:21:28.018  [2024-11-20 14:32:06.795908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.795921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.796010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.796025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:21:28.018  [2024-11-20 14:32:06.796040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.796051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.796231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.796255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:21:28.018  [2024-11-20 14:32:06.796270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.796282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.796320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.796335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:21:28.018  [2024-11-20 14:32:06.796348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.796359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.908610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.908693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:21:28.018  [2024-11-20 14:32:06.908718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.908730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.995471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.995751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:21:28.018  [2024-11-20 14:32:06.995789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.995803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.995950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.995969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:21:28.018  [2024-11-20 14:32:06.995989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.996000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.996092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.996110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:21:28.018  [2024-11-20 14:32:06.996125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.996136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.996288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.996309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:21:28.018  [2024-11-20 14:32:06.996324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.996338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.996422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.996441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:21:28.018  [2024-11-20 14:32:06.996456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.996468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.996529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.996545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:21:28.018  [2024-11-20 14:32:06.996559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.996587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.996668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:28.018  [2024-11-20 14:32:06.996685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:21:28.018  [2024-11-20 14:32:06.996700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:21:28.018  [2024-11-20 14:32:06.996711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:21:28.018  [2024-11-20 14:32:06.996923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.630 ms, result 0
00:21:28.276  true
00:21:28.276   14:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77177
00:21:28.276   14:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77177 ']'
00:21:28.276   14:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77177
00:21:28.276    14:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname
00:21:28.276   14:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:28.276    14:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77177
00:21:28.276  killing process with pid 77177
00:21:28.276   14:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:21:28.276   14:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:21:28.276   14:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77177'
00:21:28.276   14:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77177
00:21:28.276   14:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77177
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:21:33.536    14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:21:33.536    14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:21:33.536    14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:21:33.536   14:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:21:33.536  test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:21:33.536  fio-3.35
00:21:33.536  Starting 1 thread
00:21:38.823  
00:21:38.823  test: (groupid=0, jobs=1): err= 0: pid=77411: Wed Nov 20 14:32:17 2024
00:21:38.823    read: IOPS=1040, BW=69.1MiB/s (72.4MB/s)(255MiB/3684msec)
00:21:38.823      slat (nsec): min=5848, max=65525, avg=8351.02, stdev=4390.28
00:21:38.823      clat (usec): min=269, max=710, avg=428.88, stdev=55.61
00:21:38.823       lat (usec): min=288, max=727, avg=437.23, stdev=55.96
00:21:38.823      clat percentiles (usec):
00:21:38.823       |  1.00th=[  343],  5.00th=[  359], 10.00th=[  367], 20.00th=[  375],
00:21:38.823       | 30.00th=[  383], 40.00th=[  404], 50.00th=[  437], 60.00th=[  445],
00:21:38.823       | 70.00th=[  449], 80.00th=[  465], 90.00th=[  510], 95.00th=[  529],
00:21:38.823       | 99.00th=[  578], 99.50th=[  594], 99.90th=[  701], 99.95th=[  709],
00:21:38.823       | 99.99th=[  709]
00:21:38.823    write: IOPS=1047, BW=69.6MiB/s (73.0MB/s)(256MiB/3680msec); 0 zone resets
00:21:38.823      slat (usec): min=20, max=142, avg=26.18, stdev= 7.46
00:21:38.823      clat (usec): min=356, max=1237, avg=479.86, stdev=62.17
00:21:38.823       lat (usec): min=391, max=1259, avg=506.03, stdev=61.94
00:21:38.823      clat percentiles (usec):
00:21:38.823       |  1.00th=[  379],  5.00th=[  392], 10.00th=[  400], 20.00th=[  420],
00:21:38.823       | 30.00th=[  457], 40.00th=[  469], 50.00th=[  474], 60.00th=[  482],
00:21:38.823       | 70.00th=[  498], 80.00th=[  537], 90.00th=[  553], 95.00th=[  594],
00:21:38.823       | 99.00th=[  660], 99.50th=[  685], 99.90th=[  750], 99.95th=[ 1029],
00:21:38.823       | 99.99th=[ 1237]
00:21:38.823     bw (  KiB/s): min=67864, max=74528, per=100.00%, avg=71574.86, stdev=2051.62, samples=7
00:21:38.823     iops        : min=  998, max= 1096, avg=1052.57, stdev=30.17, samples=7
00:21:38.823    lat (usec)   : 500=79.23%, 750=20.72%, 1000=0.03%
00:21:38.823    lat (msec)   : 2=0.03%
00:21:38.823    cpu          : usr=98.81%, sys=0.35%, ctx=18, majf=0, minf=1169
00:21:38.823    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:21:38.823       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:38.823       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:38.823       issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:38.823       latency   : target=0, window=0, percentile=100.00%, depth=1
00:21:38.823  
00:21:38.823  Run status group 0 (all jobs):
00:21:38.823     READ: bw=69.1MiB/s (72.4MB/s), 69.1MiB/s-69.1MiB/s (72.4MB/s-72.4MB/s), io=255MiB (267MB), run=3684-3684msec
00:21:38.823    WRITE: bw=69.6MiB/s (73.0MB/s), 69.6MiB/s-69.6MiB/s (73.0MB/s-73.0MB/s), io=256MiB (269MB), run=3680-3680msec
00:21:39.759  -----------------------------------------------------
00:21:39.759  Suppressions used:
00:21:39.759    count      bytes template
00:21:39.759        1          5 /usr/src/fio/parse.c
00:21:39.759        1          8 libtcmalloc_minimal.so
00:21:39.759        1        904 libcrypto.so
00:21:39.759  -----------------------------------------------------
00:21:39.759  
00:21:39.759   14:32:18 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify
00:21:39.759   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:39.759   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:21:40.016    14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:21:40.016    14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:21:40.016    14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:21:40.016   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:21:40.017   14:32:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio
00:21:40.275  first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:21:40.275  second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:21:40.275  fio-3.35
00:21:40.275  Starting 2 threads
00:22:12.335  
00:22:12.335  first_half: (groupid=0, jobs=1): err= 0: pid=77512: Wed Nov 20 14:32:50 2024
00:22:12.335    read: IOPS=2192, BW=8770KiB/s (8981kB/s)(256MiB/29863msec)
00:22:12.335      slat (nsec): min=4651, max=83788, avg=8067.63, stdev=2415.73
00:22:12.335      clat (usec): min=718, max=367533, avg=49073.34, stdev=31812.41
00:22:12.335       lat (usec): min=724, max=367542, avg=49081.41, stdev=31812.73
00:22:12.335      clat percentiles (msec):
00:22:12.335       |  1.00th=[   13],  5.00th=[   38], 10.00th=[   39], 20.00th=[   39],
00:22:12.335       | 30.00th=[   40], 40.00th=[   40], 50.00th=[   41], 60.00th=[   44],
00:22:12.335       | 70.00th=[   45], 80.00th=[   48], 90.00th=[   55], 95.00th=[   96],
00:22:12.335       | 99.00th=[  220], 99.50th=[  234], 99.90th=[  279], 99.95th=[  330],
00:22:12.335       | 99.99th=[  363]
00:22:12.335    write: IOPS=2198, BW=8793KiB/s (9004kB/s)(256MiB/29813msec); 0 zone resets
00:22:12.335      slat (usec): min=6, max=346, avg= 9.45, stdev= 5.43
00:22:12.335      clat (usec): min=421, max=57945, avg=9259.45, stdev=9108.43
00:22:12.335       lat (usec): min=430, max=57953, avg=9268.90, stdev=9108.52
00:22:12.335      clat percentiles (usec):
00:22:12.335       |  1.00th=[ 1205],  5.00th=[ 1631], 10.00th=[ 2024], 20.00th=[ 3654],
00:22:12.335       | 30.00th=[ 4883], 40.00th=[ 5997], 50.00th=[ 7177], 60.00th=[ 8094],
00:22:12.335       | 70.00th=[ 9110], 80.00th=[10945], 90.00th=[17695], 95.00th=[27132],
00:22:12.335       | 99.00th=[49546], 99.50th=[51119], 99.90th=[54789], 99.95th=[56361],
00:22:12.335       | 99.99th=[57410]
00:22:12.335     bw (  KiB/s): min= 4512, max=40024, per=100.00%, avg=22638.52, stdev=9518.48, samples=23
00:22:12.335     iops        : min= 1128, max=10006, avg=5659.61, stdev=2379.61, samples=23
00:22:12.335    lat (usec)   : 500=0.01%, 750=0.04%, 1000=0.11%
00:22:12.335    lat (msec)   : 2=4.71%, 4=6.51%, 10=26.72%, 20=9.54%, 50=43.48%
00:22:12.335    lat (msec)   : 100=6.45%, 250=2.32%, 500=0.10%
00:22:12.335    cpu          : usr=99.02%, sys=0.18%, ctx=45, majf=0, minf=5530
00:22:12.335    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:22:12.335       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:12.335       complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:12.335       issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:12.335       latency   : target=0, window=0, percentile=100.00%, depth=128
00:22:12.335  second_half: (groupid=0, jobs=1): err= 0: pid=77513: Wed Nov 20 14:32:50 2024
00:22:12.335    read: IOPS=2212, BW=8851KiB/s (9063kB/s)(256MiB/29596msec)
00:22:12.335      slat (nsec): min=4668, max=51140, avg=7904.51, stdev=2270.67
00:22:12.335      clat (msec): min=18, max=281, avg=49.69, stdev=28.98
00:22:12.335       lat (msec): min=18, max=281, avg=49.69, stdev=28.98
00:22:12.335      clat percentiles (msec):
00:22:12.335       |  1.00th=[   37],  5.00th=[   39], 10.00th=[   39], 20.00th=[   39],
00:22:12.335       | 30.00th=[   40], 40.00th=[   40], 50.00th=[   42], 60.00th=[   44],
00:22:12.335       | 70.00th=[   46], 80.00th=[   50], 90.00th=[   58], 95.00th=[   92],
00:22:12.335       | 99.00th=[  207], 99.50th=[  222], 99.90th=[  264], 99.95th=[  271],
00:22:12.335       | 99.99th=[  275]
00:22:12.335    write: IOPS=2226, BW=8905KiB/s (9118kB/s)(256MiB/29439msec); 0 zone resets
00:22:12.335      slat (usec): min=5, max=212, avg= 8.98, stdev= 5.01
00:22:12.335      clat (usec): min=476, max=57112, avg=8127.22, stdev=5230.16
00:22:12.335       lat (usec): min=493, max=57122, avg=8136.20, stdev=5230.39
00:22:12.335      clat percentiles (usec):
00:22:12.335       |  1.00th=[ 1369],  5.00th=[ 2180], 10.00th=[ 3261], 20.00th=[ 4490],
00:22:12.335       | 30.00th=[ 5473], 40.00th=[ 6325], 50.00th=[ 7111], 60.00th=[ 7767],
00:22:12.335       | 70.00th=[ 8455], 80.00th=[10028], 90.00th=[15664], 95.00th=[19006],
00:22:12.335       | 99.00th=[24249], 99.50th=[32637], 99.90th=[43779], 99.95th=[50594],
00:22:12.335       | 99.99th=[55837]
00:22:12.335     bw (  KiB/s): min= 2016, max=41920, per=100.00%, avg=22795.13, stdev=11921.87, samples=23
00:22:12.335     iops        : min=  504, max=10480, avg=5698.78, stdev=2980.47, samples=23
00:22:12.335    lat (usec)   : 500=0.01%, 750=0.05%, 1000=0.11%
00:22:12.335    lat (msec)   : 2=1.92%, 4=5.63%, 10=32.35%, 20=8.08%, 50=42.54%
00:22:12.335    lat (msec)   : 100=7.01%, 250=2.21%, 500=0.08%
00:22:12.335    cpu          : usr=99.09%, sys=0.12%, ctx=62, majf=0, minf=5581
00:22:12.335    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:22:12.335       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:12.335       complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:12.335       issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:12.335       latency   : target=0, window=0, percentile=100.00%, depth=128
00:22:12.335  
00:22:12.335  Run status group 0 (all jobs):
00:22:12.335     READ: bw=17.1MiB/s (18.0MB/s), 8770KiB/s-8851KiB/s (8981kB/s-9063kB/s), io=512MiB (536MB), run=29596-29863msec
00:22:12.335    WRITE: bw=17.2MiB/s (18.0MB/s), 8793KiB/s-8905KiB/s (9004kB/s-9118kB/s), io=512MiB (537MB), run=29439-29813msec
00:22:13.793  -----------------------------------------------------
00:22:13.793  Suppressions used:
00:22:13.793    count      bytes template
00:22:13.793        2         10 /usr/src/fio/parse.c
00:22:13.793        2        192 /usr/src/fio/iolog.c
00:22:13.793        1          8 libtcmalloc_minimal.so
00:22:13.793        1        904 libcrypto.so
00:22:13.793  -----------------------------------------------------
00:22:13.793  
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib=
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:22:14.051    14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:22:14.051    14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:22:14.051    14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:22:14.051   14:32:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
00:22:14.309  test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:22:14.309  fio-3.35
00:22:14.309  Starting 1 thread
00:22:32.420  
00:22:32.420  test: (groupid=0, jobs=1): err= 0: pid=77881: Wed Nov 20 14:33:11 2024
00:22:32.420    read: IOPS=6036, BW=23.6MiB/s (24.7MB/s)(255MiB/10802msec)
00:22:32.420      slat (nsec): min=4653, max=70458, avg=7764.78, stdev=2692.09
00:22:32.420      clat (usec): min=822, max=41097, avg=21192.18, stdev=2239.82
00:22:32.420       lat (usec): min=828, max=41102, avg=21199.95, stdev=2240.14
00:22:32.420      clat percentiles (usec):
00:22:32.420       |  1.00th=[19006],  5.00th=[19530], 10.00th=[19530], 20.00th=[19792],
00:22:32.420       | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[20841],
00:22:32.420       | 70.00th=[21103], 80.00th=[22152], 90.00th=[24249], 95.00th=[26346],
00:22:32.420       | 99.00th=[28705], 99.50th=[30016], 99.90th=[32113], 99.95th=[36439],
00:22:32.420       | 99.99th=[40633]
00:22:32.420    write: IOPS=10.8k, BW=42.4MiB/s (44.4MB/s)(256MiB/6043msec); 0 zone resets
00:22:32.420      slat (usec): min=5, max=666, avg=10.26, stdev= 7.17
00:22:32.420      clat (usec): min=676, max=78006, avg=11738.77, stdev=15517.88
00:22:32.420       lat (usec): min=683, max=78017, avg=11749.03, stdev=15517.93
00:22:32.420      clat percentiles (usec):
00:22:32.420       |  1.00th=[  988],  5.00th=[ 1172], 10.00th=[ 1319], 20.00th=[ 1565],
00:22:32.420       | 30.00th=[ 1827], 40.00th=[ 2409], 50.00th=[ 7242], 60.00th=[ 8356],
00:22:32.420       | 70.00th=[ 9503], 80.00th=[11338], 90.00th=[42206], 95.00th=[50594],
00:22:32.421       | 99.00th=[56361], 99.50th=[57410], 99.90th=[61604], 99.95th=[62653],
00:22:32.421       | 99.99th=[71828]
00:22:32.421     bw (  KiB/s): min= 1976, max=63584, per=92.97%, avg=40329.85, stdev=15441.77, samples=13
00:22:32.421     iops        : min=  494, max=15896, avg=10082.46, stdev=3860.40, samples=13
00:22:32.421    lat (usec)   : 750=0.01%, 1000=0.61%
00:22:32.421    lat (msec)   : 2=16.70%, 4=3.64%, 10=15.71%, 20=19.51%, 50=41.18%
00:22:32.421    lat (msec)   : 100=2.64%
00:22:32.421    cpu          : usr=98.73%, sys=0.28%, ctx=23, majf=0, minf=5565
00:22:32.421    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:22:32.421       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:32.421       complete  : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1%
00:22:32.421       issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:32.421       latency   : target=0, window=0, percentile=100.00%, depth=128
00:22:32.421  
00:22:32.421  Run status group 0 (all jobs):
00:22:32.421     READ: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=255MiB (267MB), run=10802-10802msec
00:22:32.421    WRITE: bw=42.4MiB/s (44.4MB/s), 42.4MiB/s-42.4MiB/s (44.4MB/s-44.4MB/s), io=256MiB (268MB), run=6043-6043msec
00:22:34.321  -----------------------------------------------------
00:22:34.321  Suppressions used:
00:22:34.321    count      bytes template
00:22:34.321        1          5 /usr/src/fio/parse.c
00:22:34.321        2        192 /usr/src/fio/iolog.c
00:22:34.321        1          8 libtcmalloc_minimal.so
00:22:34.321        1        904 libcrypto.so
00:22:34.321  -----------------------------------------------------
00:22:34.321  
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:34.321  Remove shared memory files
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58272 /dev/shm/spdk_tgt_trace.pid76101
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f
00:22:34.321  ************************************
00:22:34.321  END TEST ftl_fio_basic
00:22:34.321  ************************************
00:22:34.321  
00:22:34.321  real	1m18.250s
00:22:34.321  user	2m57.540s
00:22:34.321  sys	0m3.954s
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:34.321   14:33:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:22:34.321   14:33:13 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:22:34.321   14:33:13 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:22:34.321   14:33:13 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:34.321   14:33:13 ftl -- common/autotest_common.sh@10 -- # set +x
00:22:34.321  ************************************
00:22:34.321  START TEST ftl_bdevperf
00:22:34.321  ************************************
00:22:34.321   14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0
00:22:34.580  * Looking for test storage...
00:22:34.580  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:22:34.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:34.580  		--rc genhtml_branch_coverage=1
00:22:34.580  		--rc genhtml_function_coverage=1
00:22:34.580  		--rc genhtml_legend=1
00:22:34.580  		--rc geninfo_all_blocks=1
00:22:34.580  		--rc geninfo_unexecuted_blocks=1
00:22:34.580  		
00:22:34.580  		'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:22:34.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:34.580  		--rc genhtml_branch_coverage=1
00:22:34.580  		--rc genhtml_function_coverage=1
00:22:34.580  		--rc genhtml_legend=1
00:22:34.580  		--rc geninfo_all_blocks=1
00:22:34.580  		--rc geninfo_unexecuted_blocks=1
00:22:34.580  		
00:22:34.580  		'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:22:34.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:34.580  		--rc genhtml_branch_coverage=1
00:22:34.580  		--rc genhtml_function_coverage=1
00:22:34.580  		--rc genhtml_legend=1
00:22:34.580  		--rc geninfo_all_blocks=1
00:22:34.580  		--rc geninfo_unexecuted_blocks=1
00:22:34.580  		
00:22:34.580  		'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:22:34.580  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:34.580  		--rc genhtml_branch_coverage=1
00:22:34.580  		--rc genhtml_function_coverage=1
00:22:34.580  		--rc genhtml_legend=1
00:22:34.580  		--rc geninfo_all_blocks=1
00:22:34.580  		--rc geninfo_unexecuted_blocks=1
00:22:34.580  		
00:22:34.580  		'
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:22:34.580      14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:22:34.580     14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid=
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:34.580    14:33:13 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append=
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78151
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78151
00:22:34.580   14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78151 ']'
00:22:34.581   14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:34.581   14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:34.581   14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:34.581  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:34.581   14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:34.581   14:33:13 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
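The block above is the standard SPDK launch pattern: bdevperf is started with -z, which keeps the app idle after initialization until bdevperf.py later triggers perform_tests over the RPC socket, and waitforlisten polls /var/tmp/spdk.sock until the app answers, with the trap guaranteeing cleanup on failure. A minimal sketch of the same pattern outside the harness, assuming a built SPDK tree in $SPDK_DIR (the rpc_get_methods probe and the sleep interval are illustrative stand-ins for what waitforlisten does internally):

    "$SPDK_DIR"/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    trap 'kill $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    # poll until the RPC server behind /var/tmp/spdk.sock responds
    until "$SPDK_DIR"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done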
00:22:34.838  [2024-11-20 14:33:13.632449] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:22:34.838  [2024-11-20 14:33:13.632995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78151 ]
00:22:35.096  [2024-11-20 14:33:13.831243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:35.096  [2024-11-20 14:33:13.937693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:35.661   14:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:35.661   14:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:22:35.661    14:33:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:22:35.661    14:33:14 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0
00:22:35.661    14:33:14 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:22:35.661    14:33:14 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424
00:22:35.661    14:33:14 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev
00:22:35.661     14:33:14 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:22:36.226    14:33:14 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1
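create_base_bdev begins by attaching the emulated NVMe controller at PCI address 0000:00:11.0; bdev_nvme_attach_controller prints the namespace bdevs it creates, and with a single namespace that is nvme0n1, which becomes the base bdev. Reproduced standalone (both RPCs appear verbatim in the trace; the jq filter is an illustrative addition):

    "$SPDK_DIR"/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # prints: nvme0n1
    "$SPDK_DIR"/scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0] | {block_size, num_blocks}'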
00:22:36.226    14:33:14 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size
00:22:36.226     14:33:14 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:22:36.226     14:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:22:36.226     14:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:22:36.226     14:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:22:36.226     14:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:22:36.226      14:33:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:22:36.485     14:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:22:36.485    {
00:22:36.485      "name": "nvme0n1",
00:22:36.485      "aliases": [
00:22:36.485        "84535072-1626-4698-9c1d-68b4be70a828"
00:22:36.485      ],
00:22:36.485      "product_name": "NVMe disk",
00:22:36.485      "block_size": 4096,
00:22:36.485      "num_blocks": 1310720,
00:22:36.485      "uuid": "84535072-1626-4698-9c1d-68b4be70a828",
00:22:36.485      "numa_id": -1,
00:22:36.485      "assigned_rate_limits": {
00:22:36.485        "rw_ios_per_sec": 0,
00:22:36.485        "rw_mbytes_per_sec": 0,
00:22:36.485        "r_mbytes_per_sec": 0,
00:22:36.485        "w_mbytes_per_sec": 0
00:22:36.485      },
00:22:36.485      "claimed": true,
00:22:36.485      "claim_type": "read_many_write_one",
00:22:36.485      "zoned": false,
00:22:36.485      "supported_io_types": {
00:22:36.485        "read": true,
00:22:36.485        "write": true,
00:22:36.485        "unmap": true,
00:22:36.485        "flush": true,
00:22:36.485        "reset": true,
00:22:36.485        "nvme_admin": true,
00:22:36.485        "nvme_io": true,
00:22:36.485        "nvme_io_md": false,
00:22:36.485        "write_zeroes": true,
00:22:36.485        "zcopy": false,
00:22:36.485        "get_zone_info": false,
00:22:36.485        "zone_management": false,
00:22:36.485        "zone_append": false,
00:22:36.485        "compare": true,
00:22:36.485        "compare_and_write": false,
00:22:36.485        "abort": true,
00:22:36.485        "seek_hole": false,
00:22:36.485        "seek_data": false,
00:22:36.485        "copy": true,
00:22:36.485        "nvme_iov_md": false
00:22:36.485      },
00:22:36.485      "driver_specific": {
00:22:36.485        "nvme": [
00:22:36.485          {
00:22:36.485            "pci_address": "0000:00:11.0",
00:22:36.485            "trid": {
00:22:36.485              "trtype": "PCIe",
00:22:36.485              "traddr": "0000:00:11.0"
00:22:36.485            },
00:22:36.485            "ctrlr_data": {
00:22:36.485              "cntlid": 0,
00:22:36.485              "vendor_id": "0x1b36",
00:22:36.485              "model_number": "QEMU NVMe Ctrl",
00:22:36.485              "serial_number": "12341",
00:22:36.485              "firmware_revision": "8.0.0",
00:22:36.485              "subnqn": "nqn.2019-08.org.qemu:12341",
00:22:36.485              "oacs": {
00:22:36.485                "security": 0,
00:22:36.485                "format": 1,
00:22:36.485                "firmware": 0,
00:22:36.485                "ns_manage": 1
00:22:36.485              },
00:22:36.485              "multi_ctrlr": false,
00:22:36.485              "ana_reporting": false
00:22:36.485            },
00:22:36.485            "vs": {
00:22:36.485              "nvme_version": "1.4"
00:22:36.485            },
00:22:36.485            "ns_data": {
00:22:36.485              "id": 1,
00:22:36.485              "can_share": false
00:22:36.485            }
00:22:36.485          }
00:22:36.485        ],
00:22:36.485        "mp_policy": "active_passive"
00:22:36.485      }
00:22:36.485    }
00:22:36.485  ]'
00:22:36.485      14:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:22:36.485     14:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:22:36.485      14:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:22:36.744     14:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720
00:22:36.744     14:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:22:36.744     14:33:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120
00:22:36.744    14:33:15 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120
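get_bdev_size converts the jq-extracted geometry to MiB: 1310720 blocks of 4096 bytes is 5368709120 bytes, exactly 5120 MiB (a 5 GiB emulated namespace). The same arithmetic in shell:

    echo $(( 4096 * 1310720 / 1024 / 1024 ))   # 5120

The guard on the next line compares the requested device size (103424 MiB) against this; since the request exceeds the raw namespace, the -le test fails and the script falls through to the logical-volume path below instead of a simple split.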
00:22:36.744    14:33:15 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:22:36.744    14:33:15 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols
00:22:36.744     14:33:15 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:22:36.744     14:33:15 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:22:37.005    14:33:15 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=63bd6bb7-7481-4a78-8f79-f2df3ef03401
00:22:37.005    14:33:15 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores
00:22:37.005    14:33:15 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 63bd6bb7-7481-4a78-8f79-f2df3ef03401
00:22:37.265     14:33:16 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:22:37.831    14:33:16 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=3e5ceb1f-9ed9-4ff0-b63c-edf6b36db8eb
00:22:37.831    14:33:16 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3e5ceb1f-9ed9-4ff0-b63c-edf6b36db8eb
00:22:38.089   14:33:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=212c9536-b0b5-47c4-9529-cdd1bff760d9
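Because 103424 MiB cannot be carved directly out of a 5120 MiB namespace, the script clears any stale lvstores, creates a fresh one named lvs on nvme0n1, and allocates a thin-provisioned (-t) 103424 MiB lvol on it. Thin provisioning is what lets a 101 GiB logical device sit on a 5 GiB backing store: clusters are allocated only on first write, which is why num_allocated_clusters is 0 in the dump below. The sequence as issued above, with the UUID being the one returned by bdev_lvol_create_lvstore:

    "$SPDK_DIR"/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    # prints: 3e5ceb1f-9ed9-4ff0-b63c-edf6b36db8eb
    "$SPDK_DIR"/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3e5ceb1f-9ed9-4ff0-b63c-edf6b36db8eb
    # prints: 212c9536-b0b5-47c4-9529-cdd1bff760d9 (aliased as lvs/nvme0n1p0)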
00:22:38.089    14:33:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:38.089    14:33:16 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0
00:22:38.089    14:33:16 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:22:38.089    14:33:16 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:38.089    14:33:16 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size=
00:22:38.089     14:33:16 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:38.089     14:33:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:38.089     14:33:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:22:38.089     14:33:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:22:38.089     14:33:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:22:38.089      14:33:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:38.348     14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:22:38.348    {
00:22:38.348      "name": "212c9536-b0b5-47c4-9529-cdd1bff760d9",
00:22:38.348      "aliases": [
00:22:38.348        "lvs/nvme0n1p0"
00:22:38.348      ],
00:22:38.348      "product_name": "Logical Volume",
00:22:38.348      "block_size": 4096,
00:22:38.348      "num_blocks": 26476544,
00:22:38.348      "uuid": "212c9536-b0b5-47c4-9529-cdd1bff760d9",
00:22:38.348      "assigned_rate_limits": {
00:22:38.348        "rw_ios_per_sec": 0,
00:22:38.348        "rw_mbytes_per_sec": 0,
00:22:38.348        "r_mbytes_per_sec": 0,
00:22:38.348        "w_mbytes_per_sec": 0
00:22:38.348      },
00:22:38.348      "claimed": false,
00:22:38.348      "zoned": false,
00:22:38.348      "supported_io_types": {
00:22:38.348        "read": true,
00:22:38.348        "write": true,
00:22:38.348        "unmap": true,
00:22:38.348        "flush": false,
00:22:38.348        "reset": true,
00:22:38.348        "nvme_admin": false,
00:22:38.348        "nvme_io": false,
00:22:38.348        "nvme_io_md": false,
00:22:38.348        "write_zeroes": true,
00:22:38.348        "zcopy": false,
00:22:38.348        "get_zone_info": false,
00:22:38.348        "zone_management": false,
00:22:38.348        "zone_append": false,
00:22:38.348        "compare": false,
00:22:38.348        "compare_and_write": false,
00:22:38.348        "abort": false,
00:22:38.348        "seek_hole": true,
00:22:38.348        "seek_data": true,
00:22:38.348        "copy": false,
00:22:38.348        "nvme_iov_md": false
00:22:38.348      },
00:22:38.348      "driver_specific": {
00:22:38.348        "lvol": {
00:22:38.348          "lvol_store_uuid": "3e5ceb1f-9ed9-4ff0-b63c-edf6b36db8eb",
00:22:38.348          "base_bdev": "nvme0n1",
00:22:38.348          "thin_provision": true,
00:22:38.348          "num_allocated_clusters": 0,
00:22:38.348          "snapshot": false,
00:22:38.348          "clone": false,
00:22:38.348          "esnap_clone": false
00:22:38.348        }
00:22:38.348      }
00:22:38.348    }
00:22:38.348  ]'
00:22:38.348      14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:22:38.606     14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:22:38.606      14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:22:38.606     14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:22:38.606     14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:22:38.606     14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:22:38.606    14:33:17 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171
00:22:38.606    14:33:17 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev
00:22:38.606     14:33:17 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:22:39.173    14:33:17 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:22:39.173    14:33:17 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]]
00:22:39.173     14:33:17 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:39.173     14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:39.173     14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:22:39.173     14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:22:39.173     14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:22:39.173      14:33:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:39.431     14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:22:39.431    {
00:22:39.431      "name": "212c9536-b0b5-47c4-9529-cdd1bff760d9",
00:22:39.431      "aliases": [
00:22:39.431        "lvs/nvme0n1p0"
00:22:39.431      ],
00:22:39.431      "product_name": "Logical Volume",
00:22:39.431      "block_size": 4096,
00:22:39.431      "num_blocks": 26476544,
00:22:39.432      "uuid": "212c9536-b0b5-47c4-9529-cdd1bff760d9",
00:22:39.432      "assigned_rate_limits": {
00:22:39.432        "rw_ios_per_sec": 0,
00:22:39.432        "rw_mbytes_per_sec": 0,
00:22:39.432        "r_mbytes_per_sec": 0,
00:22:39.432        "w_mbytes_per_sec": 0
00:22:39.432      },
00:22:39.432      "claimed": false,
00:22:39.432      "zoned": false,
00:22:39.432      "supported_io_types": {
00:22:39.432        "read": true,
00:22:39.432        "write": true,
00:22:39.432        "unmap": true,
00:22:39.432        "flush": false,
00:22:39.432        "reset": true,
00:22:39.432        "nvme_admin": false,
00:22:39.432        "nvme_io": false,
00:22:39.432        "nvme_io_md": false,
00:22:39.432        "write_zeroes": true,
00:22:39.432        "zcopy": false,
00:22:39.432        "get_zone_info": false,
00:22:39.432        "zone_management": false,
00:22:39.432        "zone_append": false,
00:22:39.432        "compare": false,
00:22:39.432        "compare_and_write": false,
00:22:39.432        "abort": false,
00:22:39.432        "seek_hole": true,
00:22:39.432        "seek_data": true,
00:22:39.432        "copy": false,
00:22:39.432        "nvme_iov_md": false
00:22:39.432      },
00:22:39.432      "driver_specific": {
00:22:39.432        "lvol": {
00:22:39.432          "lvol_store_uuid": "3e5ceb1f-9ed9-4ff0-b63c-edf6b36db8eb",
00:22:39.432          "base_bdev": "nvme0n1",
00:22:39.432          "thin_provision": true,
00:22:39.432          "num_allocated_clusters": 0,
00:22:39.432          "snapshot": false,
00:22:39.432          "clone": false,
00:22:39.432          "esnap_clone": false
00:22:39.432        }
00:22:39.432      }
00:22:39.432    }
00:22:39.432  ]'
00:22:39.432      14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:22:39.432     14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:22:39.432      14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:22:39.432     14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:22:39.432     14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:22:39.432     14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:22:39.432    14:33:18 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171
00:22:39.432    14:33:18 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:22:39.690   14:33:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0
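create_nv_cache_bdev mirrors the base setup on the second controller (0000:00:10.0) and sizes the write-buffer cache from the base bdev: the trace shows base_size=5171 immediately after get_bdev_size returned 103424, consistent with a one-twentieth sizing rule (103424 / 20 = 5171 with integer division). With no explicit cache size passed, cache_size inherits that value and bdev_split_create carves a single 5171 MiB partition out of nvc0n1:

    "$SPDK_DIR"/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    # prints: nvc0n1
    "$SPDK_DIR"/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    # prints: nvc0n1p0 (one 5171 MiB split)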
00:22:39.690    14:33:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:39.690    14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:39.690    14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info
00:22:39.690    14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs
00:22:39.690    14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb
00:22:39.690     14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 212c9536-b0b5-47c4-9529-cdd1bff760d9
00:22:40.256    14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[
00:22:40.256    {
00:22:40.256      "name": "212c9536-b0b5-47c4-9529-cdd1bff760d9",
00:22:40.256      "aliases": [
00:22:40.256        "lvs/nvme0n1p0"
00:22:40.256      ],
00:22:40.256      "product_name": "Logical Volume",
00:22:40.256      "block_size": 4096,
00:22:40.256      "num_blocks": 26476544,
00:22:40.256      "uuid": "212c9536-b0b5-47c4-9529-cdd1bff760d9",
00:22:40.256      "assigned_rate_limits": {
00:22:40.256        "rw_ios_per_sec": 0,
00:22:40.256        "rw_mbytes_per_sec": 0,
00:22:40.256        "r_mbytes_per_sec": 0,
00:22:40.256        "w_mbytes_per_sec": 0
00:22:40.256      },
00:22:40.256      "claimed": false,
00:22:40.256      "zoned": false,
00:22:40.256      "supported_io_types": {
00:22:40.256        "read": true,
00:22:40.256        "write": true,
00:22:40.256        "unmap": true,
00:22:40.256        "flush": false,
00:22:40.256        "reset": true,
00:22:40.256        "nvme_admin": false,
00:22:40.256        "nvme_io": false,
00:22:40.256        "nvme_io_md": false,
00:22:40.256        "write_zeroes": true,
00:22:40.256        "zcopy": false,
00:22:40.256        "get_zone_info": false,
00:22:40.256        "zone_management": false,
00:22:40.256        "zone_append": false,
00:22:40.256        "compare": false,
00:22:40.256        "compare_and_write": false,
00:22:40.256        "abort": false,
00:22:40.256        "seek_hole": true,
00:22:40.256        "seek_data": true,
00:22:40.256        "copy": false,
00:22:40.256        "nvme_iov_md": false
00:22:40.256      },
00:22:40.256      "driver_specific": {
00:22:40.256        "lvol": {
00:22:40.256          "lvol_store_uuid": "3e5ceb1f-9ed9-4ff0-b63c-edf6b36db8eb",
00:22:40.256          "base_bdev": "nvme0n1",
00:22:40.256          "thin_provision": true,
00:22:40.256          "num_allocated_clusters": 0,
00:22:40.256          "snapshot": false,
00:22:40.256          "clone": false,
00:22:40.256          "esnap_clone": false
00:22:40.256        }
00:22:40.256      }
00:22:40.256    }
00:22:40.256  ]'
00:22:40.256     14:33:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:22:40.256    14:33:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096
00:22:40.256     14:33:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:22:40.256    14:33:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544
00:22:40.256    14:33:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:22:40.256    14:33:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424
00:22:40.256   14:33:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20
00:22:40.256   14:33:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 212c9536-b0b5-47c4-9529-cdd1bff760d9 -c nvc0n1p0 --l2p_dram_limit 20
00:22:40.514  [2024-11-20 14:33:19.338303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.514  [2024-11-20 14:33:19.338404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:22:40.514  [2024-11-20 14:33:19.338441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:22:40.514  [2024-11-20 14:33:19.338469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.514  [2024-11-20 14:33:19.338615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.514  [2024-11-20 14:33:19.338658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:22:40.514  [2024-11-20 14:33:19.338695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.104 ms
00:22:40.514  [2024-11-20 14:33:19.338722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.514  [2024-11-20 14:33:19.338771] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:40.514  [2024-11-20 14:33:19.340230] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:22:40.514  [2024-11-20 14:33:19.340300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.515  [2024-11-20 14:33:19.340335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:22:40.515  [2024-11-20 14:33:19.340361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.540 ms
00:22:40.515  [2024-11-20 14:33:19.340387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.515  [2024-11-20 14:33:19.340591] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3e30eb3e-f635-442d-b990-45a8a3dfa9d4
00:22:40.515  [2024-11-20 14:33:19.341961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.515  [2024-11-20 14:33:19.342207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:22:40.515  [2024-11-20 14:33:19.342260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.034 ms
00:22:40.515  [2024-11-20 14:33:19.342293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.515  [2024-11-20 14:33:19.348168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.515  [2024-11-20 14:33:19.348261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:22:40.515  [2024-11-20 14:33:19.348299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.733 ms
00:22:40.515  [2024-11-20 14:33:19.348322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.515  [2024-11-20 14:33:19.348529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.515  [2024-11-20 14:33:19.348563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:22:40.515  [2024-11-20 14:33:19.348631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.126 ms
00:22:40.515  [2024-11-20 14:33:19.348655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.515  [2024-11-20 14:33:19.348811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.515  [2024-11-20 14:33:19.348849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:22:40.515  [2024-11-20 14:33:19.348877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:22:40.515  [2024-11-20 14:33:19.348898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.515  [2024-11-20 14:33:19.348957] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:40.515  [2024-11-20 14:33:19.355987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.515  [2024-11-20 14:33:19.356245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:22:40.515  [2024-11-20 14:33:19.356291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.049 ms
00:22:40.515  [2024-11-20 14:33:19.356331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.515  [2024-11-20 14:33:19.356412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.515  [2024-11-20 14:33:19.356445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:22:40.515  [2024-11-20 14:33:19.356468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:22:40.515  [2024-11-20 14:33:19.356492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.515  [2024-11-20 14:33:19.356635] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:22:40.515  [2024-11-20 14:33:19.356859] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:22:40.515  [2024-11-20 14:33:19.356905] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:22:40.515  [2024-11-20 14:33:19.356941] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:22:40.515  [2024-11-20 14:33:19.356970] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:22:40.515  [2024-11-20 14:33:19.356999] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:22:40.515  [2024-11-20 14:33:19.357023] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:22:40.515  [2024-11-20 14:33:19.357049] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:22:40.515  [2024-11-20 14:33:19.357078] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:22:40.515  [2024-11-20 14:33:19.357106] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
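The layout summary is internally consistent and worth decoding. The FTL exposes 20971520 user blocks (80 GiB at 4 KiB), so the logical-to-physical table needs 20971520 entries of 4 bytes each, i.e. 80 MiB, which is exactly the "Region l2p ... 80.00 MiB" reported in the NV cache layout dump below; the 5 NV cache chunks are presumably the usable data area of the 5171 MiB cache partition divided into fixed-size compaction units. Since bdev_ftl_create was given --l2p_dram_limit 20, only a quarter of that 80 MiB table may ever be memory-resident, which is why the startup trace later reports "l2p maximum resident size is: 19 (of 20) MiB". Quick check of the L2P figure:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 (MiB)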
00:22:40.515  [2024-11-20 14:33:19.357131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.515  [2024-11-20 14:33:19.357175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:22:40.515  [2024-11-20 14:33:19.357200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.499 ms
00:22:40.515  [2024-11-20 14:33:19.357225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.515  [2024-11-20 14:33:19.357356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.515  [2024-11-20 14:33:19.357398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:22:40.515  [2024-11-20 14:33:19.357424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.083 ms
00:22:40.515  [2024-11-20 14:33:19.357451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.515  [2024-11-20 14:33:19.357611] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:22:40.515  [2024-11-20 14:33:19.357652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:22:40.515  [2024-11-20 14:33:19.357689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:22:40.515  [2024-11-20 14:33:19.357716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:40.515  [2024-11-20 14:33:19.357739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:22:40.515  [2024-11-20 14:33:19.357766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:22:40.515  [2024-11-20 14:33:19.357789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:22:40.515  [2024-11-20 14:33:19.357817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:22:40.515  [2024-11-20 14:33:19.357841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:22:40.515  [2024-11-20 14:33:19.357866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:22:40.515  [2024-11-20 14:33:19.357888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:22:40.515  [2024-11-20 14:33:19.357914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:22:40.515  [2024-11-20 14:33:19.357937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:22:40.515  [2024-11-20 14:33:19.357983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:22:40.515  [2024-11-20 14:33:19.358009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:22:40.515  [2024-11-20 14:33:19.358041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:40.515  [2024-11-20 14:33:19.358064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:22:40.515  [2024-11-20 14:33:19.358089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:22:40.515  [2024-11-20 14:33:19.358109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:40.515  [2024-11-20 14:33:19.358135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:22:40.515  [2024-11-20 14:33:19.358156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:22:40.515  [2024-11-20 14:33:19.358179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:40.515  [2024-11-20 14:33:19.358200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:22:40.515  [2024-11-20 14:33:19.358224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:22:40.515  [2024-11-20 14:33:19.358249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:40.515  [2024-11-20 14:33:19.358272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:22:40.515  [2024-11-20 14:33:19.358293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:22:40.515  [2024-11-20 14:33:19.358319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:40.515  [2024-11-20 14:33:19.358341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:22:40.515  [2024-11-20 14:33:19.358368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:22:40.515  [2024-11-20 14:33:19.358389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:22:40.516  [2024-11-20 14:33:19.358417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:22:40.516  [2024-11-20 14:33:19.358439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:22:40.516  [2024-11-20 14:33:19.358465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:22:40.516  [2024-11-20 14:33:19.358487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:22:40.516  [2024-11-20 14:33:19.358512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:22:40.516  [2024-11-20 14:33:19.358533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:22:40.516  [2024-11-20 14:33:19.358558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:22:40.516  [2024-11-20 14:33:19.358621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:22:40.516  [2024-11-20 14:33:19.358651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:40.516  [2024-11-20 14:33:19.358672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:22:40.516  [2024-11-20 14:33:19.358695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:22:40.516  [2024-11-20 14:33:19.358715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:40.516  [2024-11-20 14:33:19.358738] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:22:40.516  [2024-11-20 14:33:19.358761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:22:40.516  [2024-11-20 14:33:19.358792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:22:40.516  [2024-11-20 14:33:19.358816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:22:40.516  [2024-11-20 14:33:19.358858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:22:40.516  [2024-11-20 14:33:19.358882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:22:40.516  [2024-11-20 14:33:19.358905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:22:40.516  [2024-11-20 14:33:19.358925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:22:40.516  [2024-11-20 14:33:19.358949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:22:40.516  [2024-11-20 14:33:19.358971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:22:40.516  [2024-11-20 14:33:19.358999] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:22:40.516  [2024-11-20 14:33:19.359025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:40.516  [2024-11-20 14:33:19.359052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:22:40.516  [2024-11-20 14:33:19.359074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:22:40.516  [2024-11-20 14:33:19.359099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:22:40.516  [2024-11-20 14:33:19.359120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:22:40.516  [2024-11-20 14:33:19.359145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:22:40.516  [2024-11-20 14:33:19.359169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:22:40.516  [2024-11-20 14:33:19.359195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:22:40.516  [2024-11-20 14:33:19.359218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:22:40.516  [2024-11-20 14:33:19.359248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:22:40.516  [2024-11-20 14:33:19.359272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:22:40.516  [2024-11-20 14:33:19.359298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:22:40.516  [2024-11-20 14:33:19.359331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:22:40.516  [2024-11-20 14:33:19.359374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:22:40.516  [2024-11-20 14:33:19.359401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:22:40.516  [2024-11-20 14:33:19.359430] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:22:40.516  [2024-11-20 14:33:19.359458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:40.516  [2024-11-20 14:33:19.359489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:22:40.516  [2024-11-20 14:33:19.359511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:22:40.516  [2024-11-20 14:33:19.359536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:22:40.516  [2024-11-20 14:33:19.359558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:22:40.516  [2024-11-20 14:33:19.359611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:40.516  [2024-11-20 14:33:19.359641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:22:40.516  [2024-11-20 14:33:19.359667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.096 ms
00:22:40.516  [2024-11-20 14:33:19.359688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:40.516  [2024-11-20 14:33:19.359817] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:22:40.516  [2024-11-20 14:33:19.359850] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:22:42.471  [2024-11-20 14:33:21.389246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.471  [2024-11-20 14:33:21.389390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:22:42.471  [2024-11-20 14:33:21.389437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2029.422 ms
00:22:42.471  [2024-11-20 14:33:21.389454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.729  [2024-11-20 14:33:21.438045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.729  [2024-11-20 14:33:21.438144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:22:42.729  [2024-11-20 14:33:21.438184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 48.090 ms
00:22:42.729  [2024-11-20 14:33:21.438212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.729  [2024-11-20 14:33:21.438465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.438497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:22:42.730  [2024-11-20 14:33:21.438529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.080 ms
00:22:42.730  [2024-11-20 14:33:21.438549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.507533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.507647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:22:42.730  [2024-11-20 14:33:21.507685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 68.860 ms
00:22:42.730  [2024-11-20 14:33:21.507705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.507790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.507820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:22:42.730  [2024-11-20 14:33:21.507845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:22:42.730  [2024-11-20 14:33:21.507864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.508383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.508422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:22:42.730  [2024-11-20 14:33:21.508450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.361 ms
00:22:42.730  [2024-11-20 14:33:21.508469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.508714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.508744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:22:42.730  [2024-11-20 14:33:21.508770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.201 ms
00:22:42.730  [2024-11-20 14:33:21.508789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.529150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.529224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:22:42.730  [2024-11-20 14:33:21.529250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.319 ms
00:22:42.730  [2024-11-20 14:33:21.529263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.542989] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB
00:22:42.730  [2024-11-20 14:33:21.548158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.548221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:22:42.730  [2024-11-20 14:33:21.548243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.742 ms
00:22:42.730  [2024-11-20 14:33:21.548258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.603424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.603520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:22:42.730  [2024-11-20 14:33:21.603543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 55.105 ms
00:22:42.730  [2024-11-20 14:33:21.603559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.603843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.603873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:22:42.730  [2024-11-20 14:33:21.603888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.180 ms
00:22:42.730  [2024-11-20 14:33:21.603905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.636296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.636614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:22:42.730  [2024-11-20 14:33:21.636650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.292 ms
00:22:42.730  [2024-11-20 14:33:21.636668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.668747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.668838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:22:42.730  [2024-11-20 14:33:21.668861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.998 ms
00:22:42.730  [2024-11-20 14:33:21.668876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.730  [2024-11-20 14:33:21.669671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.730  [2024-11-20 14:33:21.669711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:22:42.730  [2024-11-20 14:33:21.669728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.707 ms
00:22:42.730  [2024-11-20 14:33:21.669742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.988  [2024-11-20 14:33:21.754667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.988  [2024-11-20 14:33:21.754786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:22:42.988  [2024-11-20 14:33:21.754809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 84.824 ms
00:22:42.988  [2024-11-20 14:33:21.754825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.988  [2024-11-20 14:33:21.789636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.988  [2024-11-20 14:33:21.789738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:22:42.988  [2024-11-20 14:33:21.789764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.654 ms
00:22:42.988  [2024-11-20 14:33:21.789780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.988  [2024-11-20 14:33:21.823071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.988  [2024-11-20 14:33:21.823159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:22:42.988  [2024-11-20 14:33:21.823181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.233 ms
00:22:42.988  [2024-11-20 14:33:21.823196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.988  [2024-11-20 14:33:21.856203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.988  [2024-11-20 14:33:21.856383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:22:42.988  [2024-11-20 14:33:21.856413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.948 ms
00:22:42.988  [2024-11-20 14:33:21.856429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.988  [2024-11-20 14:33:21.856488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.988  [2024-11-20 14:33:21.856510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:22:42.988  [2024-11-20 14:33:21.856524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:22:42.988  [2024-11-20 14:33:21.856538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.988  [2024-11-20 14:33:21.856709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:42.988  [2024-11-20 14:33:21.856735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:22:42.988  [2024-11-20 14:33:21.856748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:22:42.988  [2024-11-20 14:33:21.856762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:42.988  [2024-11-20 14:33:21.857871] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2519.089 ms, result 0
00:22:42.988  {
00:22:42.988    "name": "ftl0",
00:22:42.988    "uuid": "3e30eb3e-f635-442d-b990-45a8a3dfa9d4"
00:22:42.988  }
00:22:42.988   14:33:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:22:42.988   14:33:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name
00:22:42.988   14:33:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0
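bdev_ftl_create returns the new device's name and UUID as JSON once the "FTL startup" management process completes (about 2.5 s here, dominated by the 2 s NV cache scrub). The bdevperf.sh@28 pipeline above is a liveness check: query the FTL's stats and assert that the name round-trips. Standalone:

    "$SPDK_DIR"/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0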
00:22:43.246   14:33:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
00:22:43.504  [2024-11-20 14:33:22.330416] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:22:43.504  I/O size of 69632 is greater than zero copy threshold (65536).
00:22:43.504  Zero copy mechanism will not be used.
00:22:43.504  Running I/O for 4 seconds...
00:22:45.369       1953.00 IOPS,   129.69 MiB/s
[2024-11-20T14:33:25.724Z]      1957.00 IOPS,   129.96 MiB/s
[2024-11-20T14:33:26.672Z]      1938.33 IOPS,   128.72 MiB/s
[2024-11-20T14:33:26.672Z]      1935.25 IOPS,   128.51 MiB/s
00:22:47.690                                                                                                  Latency(us)
00:22:47.690  
[2024-11-20T14:33:26.672Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:47.690  Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:22:47.690  	 ftl0                :       4.00    1934.25     128.45       0.00     0.00     540.34     242.04    2487.39
00:22:47.690  
[2024-11-20T14:33:26.672Z]  ===================================================================================================================
00:22:47.690  
[2024-11-20T14:33:26.672Z]  Total                       :               1934.25     128.45       0.00     0.00     540.34     242.04    2487.39
00:22:47.690  [2024-11-20 14:33:26.342819] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:22:47.690  {
00:22:47.690    "results": [
00:22:47.690      {
00:22:47.690        "job": "ftl0",
00:22:47.690        "core_mask": "0x1",
00:22:47.690        "workload": "randwrite",
00:22:47.690        "status": "finished",
00:22:47.690        "queue_depth": 1,
00:22:47.690        "io_size": 69632,
00:22:47.690        "runtime": 4.002575,
00:22:47.690        "iops": 1934.2548234573992,
00:22:47.690        "mibps": 128.4466093702179,
00:22:47.690        "io_failed": 0,
00:22:47.690        "io_timeout": 0,
00:22:47.690        "avg_latency_us": 540.3411437025903,
00:22:47.690        "min_latency_us": 242.03636363636363,
00:22:47.690        "max_latency_us": 2487.389090909091
00:22:47.690      }
00:22:47.690    ],
00:22:47.690    "core_count": 1
00:22:47.690  }
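First workload: queue depth 1, random writes, 69632-byte (68 KiB) I/Os for 4 seconds. 69632 exceeds bdevperf's 65536-byte zero-copy threshold, hence the notice that zero copy is disabled for this job. The headline numbers are self-consistent: 1934.25 IOPS at 69632 bytes per I/O works out to the reported 128.45 MiB/s.

    echo 'scale=2; 1934.25 * 69632 / 1048576' | bc   # 128.44, matching the reported ~128.45 MiB/s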
00:22:47.690   14:33:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:22:47.691  [2024-11-20 14:33:26.485559] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:22:47.691  Running I/O for 4 seconds...
00:22:49.559       7482.00 IOPS,    29.23 MiB/s
[2024-11-20T14:33:29.914Z]      7113.00 IOPS,    27.79 MiB/s
[2024-11-20T14:33:30.850Z]      7091.33 IOPS,    27.70 MiB/s
[2024-11-20T14:33:30.850Z]      7041.50 IOPS,    27.51 MiB/s
00:22:51.868                                                                                                  Latency(us)
00:22:51.868  
[2024-11-20T14:33:30.850Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:51.868  Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:22:51.868  	 ftl0                :       4.02    7032.24      27.47       0.00     0.00   18146.88     387.26   40989.79
00:22:51.868  
[2024-11-20T14:33:30.850Z]  ===================================================================================================================
00:22:51.868  
[2024-11-20T14:33:30.850Z]  Total                       :               7032.24      27.47       0.00     0.00   18146.88       0.00   40989.79
00:22:51.868  [2024-11-20 14:33:30.520238] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:22:51.868  {
00:22:51.868    "results": [
00:22:51.868      {
00:22:51.868        "job": "ftl0",
00:22:51.868        "core_mask": "0x1",
00:22:51.868        "workload": "randwrite",
00:22:51.868        "status": "finished",
00:22:51.868        "queue_depth": 128,
00:22:51.868        "io_size": 4096,
00:22:51.868        "runtime": 4.023186,
00:22:51.868        "iops": 7032.237634551323,
00:22:51.868        "mibps": 27.469678259966106,
00:22:51.868        "io_failed": 0,
00:22:51.868        "io_timeout": 0,
00:22:51.868        "avg_latency_us": 18146.876253100778,
00:22:51.868        "min_latency_us": 387.2581818181818,
00:22:51.868        "max_latency_us": 40989.78909090909
00:22:51.868      }
00:22:51.868    ],
00:22:51.868    "core_count": 1
00:22:51.868  }
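Second workload: the same random writes at queue depth 128 with 4 KiB I/Os. Throughput again checks out (7032.24 IOPS at 4096 bytes is the reported 27.47 MiB/s), and latency and throughput agree with Little's law: 128 I/Os in flight divided by the 18.147 ms average latency predicts about 7054 IOPS, close to the measured 7032. The roughly 34x latency increase over the QD1 run is therefore consistent with queueing at depth 128 rather than a device slowdown.

    echo 'scale=2; 7032.24 * 4096 / 1048576' | bc   # 27.46 (MiB/s)
    echo '128 / 0.018146876' | bc -l                # ~7053.5 IOPS predicted by Little's law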
00:22:51.868   14:33:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:22:51.868  [2024-11-20 14:33:30.702431] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:22:51.868  Running I/O for 4 seconds...
00:22:53.740       5409.00 IOPS,    21.13 MiB/s
[2024-11-20T14:33:34.096Z]      5691.00 IOPS,    22.23 MiB/s
[2024-11-20T14:33:35.030Z]      5768.33 IOPS,    22.53 MiB/s
[2024-11-20T14:33:35.030Z]      5768.25 IOPS,    22.53 MiB/s
00:22:56.048                                                                                                  Latency(us)
00:22:56.048  
[2024-11-20T14:33:35.030Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:56.048  Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:56.048  	 Verification LBA range: start 0x0 length 0x1400000
00:22:56.048  	 ftl0                :       4.02    5774.33      22.56       0.00     0.00   22079.27     390.98   29789.09
00:22:56.048  
[2024-11-20T14:33:35.030Z]  ===================================================================================================================
00:22:56.048  
[2024-11-20T14:33:35.030Z]  Total                       :               5774.33      22.56       0.00     0.00   22079.27       0.00   29789.09
00:22:56.048  [2024-11-20 14:33:34.742472] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:22:56.048  {
00:22:56.048    "results": [
00:22:56.048      {
00:22:56.048        "job": "ftl0",
00:22:56.048        "core_mask": "0x1",
00:22:56.048        "workload": "verify",
00:22:56.048        "status": "finished",
00:22:56.048        "verify_range": {
00:22:56.048          "start": 0,
00:22:56.048          "length": 20971520
00:22:56.048        },
00:22:56.048        "queue_depth": 128,
00:22:56.048        "io_size": 4096,
00:22:56.048        "runtime": 4.017954,
00:22:56.048        "iops": 5774.3319112165045,
00:22:56.048        "mibps": 22.55598402818947,
00:22:56.048        "io_failed": 0,
00:22:56.048        "io_timeout": 0,
00:22:56.048        "avg_latency_us": 22079.272929144903,
00:22:56.048        "min_latency_us": 390.9818181818182,
00:22:56.048        "max_latency_us": 29789.090909090908
00:22:56.048      }
00:22:56.048    ],
00:22:56.048    "core_count": 1
00:22:56.048  }
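Third workload: verify mode at queue depth 128 with 4 KiB I/Os, which writes data and reads it back with content checking. The reported LBA range "start 0x0 length 0x1400000" is in blocks: 0x1400000 is 20971520, so the verify span covers the FTL device's full 80 GiB logical space, the same figure as the L2P entry count at startup, and the JSON echoes it back as "length": 20971520. The drop to about 5774 IOPS from the ~7032 of the pure-write run reflects the added read-back and compare work.

    printf '%d\n' 0x1400000   # 20971520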
00:22:56.048   14:33:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:22:56.327  [2024-11-20 14:33:35.038881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.327  [2024-11-20 14:33:35.038964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:22:56.327  [2024-11-20 14:33:35.038990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:22:56.327  [2024-11-20 14:33:35.039008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.327  [2024-11-20 14:33:35.039049] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:56.327  [2024-11-20 14:33:35.043259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.327  [2024-11-20 14:33:35.043310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:22:56.327  [2024-11-20 14:33:35.043335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.174 ms
00:22:56.327  [2024-11-20 14:33:35.043350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.327  [2024-11-20 14:33:35.044896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.327  [2024-11-20 14:33:35.044949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:22:56.327  [2024-11-20 14:33:35.044978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.484 ms
00:22:56.328  [2024-11-20 14:33:35.044994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.328  [2024-11-20 14:33:35.250742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.328  [2024-11-20 14:33:35.250821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:22:56.328  [2024-11-20 14:33:35.250852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 205.705 ms
00:22:56.328  [2024-11-20 14:33:35.250866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.328  [2024-11-20 14:33:35.257689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.328  [2024-11-20 14:33:35.257873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:22:56.328  [2024-11-20 14:33:35.257908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.768 ms
00:22:56.328  [2024-11-20 14:33:35.257926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.328  [2024-11-20 14:33:35.291965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.328  [2024-11-20 14:33:35.292040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:22:56.328  [2024-11-20 14:33:35.292064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.954 ms
00:22:56.328  [2024-11-20 14:33:35.292077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.588  [2024-11-20 14:33:35.311095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.588  [2024-11-20 14:33:35.311167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:22:56.588  [2024-11-20 14:33:35.311191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.934 ms
00:22:56.588  [2024-11-20 14:33:35.311205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.588  [2024-11-20 14:33:35.311420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.588  [2024-11-20 14:33:35.311444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:22:56.588  [2024-11-20 14:33:35.311464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.149 ms
00:22:56.588  [2024-11-20 14:33:35.311476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.588  [2024-11-20 14:33:35.343430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.588  [2024-11-20 14:33:35.343501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:22:56.588  [2024-11-20 14:33:35.343525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.918 ms
00:22:56.588  [2024-11-20 14:33:35.343538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.588  [2024-11-20 14:33:35.375908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.588  [2024-11-20 14:33:35.375972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:22:56.588  [2024-11-20 14:33:35.375996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.266 ms
00:22:56.588  [2024-11-20 14:33:35.376009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.588  [2024-11-20 14:33:35.407265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.588  [2024-11-20 14:33:35.407336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:22:56.588  [2024-11-20 14:33:35.407369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.185 ms
00:22:56.588  [2024-11-20 14:33:35.407384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.588  [2024-11-20 14:33:35.439083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.588  [2024-11-20 14:33:35.439158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:22:56.588  [2024-11-20 14:33:35.439186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.548 ms
00:22:56.588  [2024-11-20 14:33:35.439199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.588  [2024-11-20 14:33:35.439272] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:56.588  [2024-11-20 14:33:35.439299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.588  [2024-11-20 14:33:35.439772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.439785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.439799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.439826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.439860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.439883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.439911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.439937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.439966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.440989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:22:56.589  [2024-11-20 14:33:35.441175] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:22:56.589  [2024-11-20 14:33:35.441189] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         3e30eb3e-f635-442d-b990-45a8a3dfa9d4
00:22:56.589  [2024-11-20 14:33:35.441205] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:22:56.589  [2024-11-20 14:33:35.441218] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:22:56.589  [2024-11-20 14:33:35.441229] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:22:56.589  [2024-11-20 14:33:35.441244] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:22:56.589  [2024-11-20 14:33:35.441255] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:56.590  [2024-11-20 14:33:35.441269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:22:56.590  [2024-11-20 14:33:35.441281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:22:56.590  [2024-11-20 14:33:35.441295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:22:56.590  [2024-11-20 14:33:35.441306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
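The "WAF: inf" line follows directly from the counters dumped above: write amplification is total media writes divided by user writes, and this run recorded zero user writes against 960 total writes, so the ratio is reported as infinite. A sketch of the ratio, with variable names of my own choosing:

    # Write amplification factor as dumped by ftl_dev_dump_stats above.
    total_writes = 960   # "total writes"
    user_writes = 0      # "user writes"
    waf = total_writes / user_writes if user_writes else float("inf")
    print(waf)           # inf, matching "WAF: inf"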
00:22:56.590  [2024-11-20 14:33:35.441321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.590  [2024-11-20 14:33:35.441333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:22:56.590  [2024-11-20 14:33:35.441353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.054 ms
00:22:56.590  [2024-11-20 14:33:35.441366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.590  [2024-11-20 14:33:35.458154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.590  [2024-11-20 14:33:35.458206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:22:56.590  [2024-11-20 14:33:35.458227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.699 ms
00:22:56.590  [2024-11-20 14:33:35.458240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.590  [2024-11-20 14:33:35.458707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:56.590  [2024-11-20 14:33:35.458732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:22:56.590  [2024-11-20 14:33:35.458749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.429 ms
00:22:56.590  [2024-11-20 14:33:35.458761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.590  [2024-11-20 14:33:35.505104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.590  [2024-11-20 14:33:35.505371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:22:56.590  [2024-11-20 14:33:35.505412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.590  [2024-11-20 14:33:35.505427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.590  [2024-11-20 14:33:35.505517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.590  [2024-11-20 14:33:35.505534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:22:56.590  [2024-11-20 14:33:35.505548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.590  [2024-11-20 14:33:35.505560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.590  [2024-11-20 14:33:35.505725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.590  [2024-11-20 14:33:35.505748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:22:56.590  [2024-11-20 14:33:35.505764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.590  [2024-11-20 14:33:35.505775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.590  [2024-11-20 14:33:35.505801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.590  [2024-11-20 14:33:35.505815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:22:56.590  [2024-11-20 14:33:35.505830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.590  [2024-11-20 14:33:35.505841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.849  [2024-11-20 14:33:35.611054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.849  [2024-11-20 14:33:35.611125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:22:56.849  [2024-11-20 14:33:35.611150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.849  [2024-11-20 14:33:35.611164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.849  [2024-11-20 14:33:35.698687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.849  [2024-11-20 14:33:35.698766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:22:56.849  [2024-11-20 14:33:35.698791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.849  [2024-11-20 14:33:35.698805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.849  [2024-11-20 14:33:35.698956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.849  [2024-11-20 14:33:35.698978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:22:56.849  [2024-11-20 14:33:35.698994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.849  [2024-11-20 14:33:35.699006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.849  [2024-11-20 14:33:35.699075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.849  [2024-11-20 14:33:35.699094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:22:56.849  [2024-11-20 14:33:35.699109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.849  [2024-11-20 14:33:35.699121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.849  [2024-11-20 14:33:35.699252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.849  [2024-11-20 14:33:35.699275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:22:56.849  [2024-11-20 14:33:35.699294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.849  [2024-11-20 14:33:35.699306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.849  [2024-11-20 14:33:35.699381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.849  [2024-11-20 14:33:35.699403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:22:56.849  [2024-11-20 14:33:35.699419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.849  [2024-11-20 14:33:35.699430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.849  [2024-11-20 14:33:35.699481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.849  [2024-11-20 14:33:35.699499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:22:56.849  [2024-11-20 14:33:35.699513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.849  [2024-11-20 14:33:35.699525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.849  [2024-11-20 14:33:35.699619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:56.849  [2024-11-20 14:33:35.699669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:22:56.849  [2024-11-20 14:33:35.699686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:22:56.849  [2024-11-20 14:33:35.699698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:22:56.849  [2024-11-20 14:33:35.699856] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 660.946 ms, result 0
00:22:56.849  true
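The per-step durations logged by trace_step above account for only part of the 660.946 ms "FTL shutdown" total; the rest is untraced gaps between steps. A hypothetical log-parsing sketch that sums the traced portion from a saved copy of this output (regex keyed to the "duration: N ms" lines in this format):

    import re

    # Sum the "duration: N ms" values from trace_step lines in a saved log.
    # Hypothetical helper; the log format is assumed from the output above.
    def total_step_ms(log_text: str) -> float:
        return sum(float(ms) for ms in
                   re.findall(r"duration:\s+([0-9.]+) ms", log_text))

    # e.g. total_step_ms(open("build.log").read()) gives the traced sum,
    # which comes in under the 660.946 ms management-process total.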
00:22:56.849   14:33:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78151
00:22:56.849   14:33:35 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78151 ']'
00:22:56.849   14:33:35 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78151
00:22:56.849    14:33:35 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:22:56.849   14:33:35 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:56.849    14:33:35 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78151
00:22:56.849  killing process with pid 78151
00:22:56.849  Received shutdown signal, test time was about 4.000000 seconds
00:22:56.849  
00:22:56.849                                                                                                  Latency(us)
00:22:56.849  
[2024-11-20T14:33:35.831Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:22:56.849  
[2024-11-20T14:33:35.831Z]  ===================================================================================================================
00:22:56.849  
[2024-11-20T14:33:35.831Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:22:56.849   14:33:35 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:56.849   14:33:35 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:56.849   14:33:35 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78151'
00:22:56.849   14:33:35 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78151
00:22:56.849   14:33:35 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78151
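The killprocess sequence above guards the kill: it checks the pid is set, probes it with kill -0, on Linux reads the process comm (here reactor_0) and refuses to signal a sudo wrapper, then kills and waits. A rough Python equivalent of that pattern (function name mine, not SPDK's; Linux only):

    import os, signal

    # Rough equivalent of the killprocess guard traced above.
    def kill_process(pid: int) -> None:
        os.kill(pid, 0)                       # probe, like kill -0
        with open(f"/proc/{pid}/comm") as f:  # ps --no-headers -o comm=
            name = f.read().strip()
        if name == "sudo":                    # never signal a sudo wrapper
            raise RuntimeError("refusing to kill sudo")
        print(f"killing process with pid {pid}")
        os.kill(pid, signal.SIGTERM)          # plain kill sends SIGTERM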
00:22:57.783  Remove shared memory files
00:22:57.783   14:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:22:57.783   14:33:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:22:57.783   14:33:36 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:22:57.783   14:33:36 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:22:57.783   14:33:36 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:22:57.783   14:33:36 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:22:57.783   14:33:36 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:22:57.783   14:33:36 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:22:58.042  ************************************
00:22:58.042  END TEST ftl_bdevperf
00:22:58.042  ************************************
00:22:58.042  
00:22:58.042  real	0m23.476s
00:22:58.042  user	0m28.273s
00:22:58.042  sys	0m1.139s
00:22:58.042   14:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:58.042   14:33:36 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:22:58.042   14:33:36 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:22:58.042   14:33:36 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:22:58.042   14:33:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:58.042   14:33:36 ftl -- common/autotest_common.sh@10 -- # set +x
00:22:58.042  ************************************
00:22:58.042  START TEST ftl_trim
00:22:58.042  ************************************
00:22:58.042   14:33:36 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:22:58.042  * Looking for test storage...
00:22:58.042  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:22:58.042    14:33:36 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:22:58.042     14:33:36 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version
00:22:58.042     14:33:36 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:22:58.042    14:33:36 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:58.042     14:33:36 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1
00:22:58.042     14:33:36 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1
00:22:58.042     14:33:36 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:58.042     14:33:36 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1
00:22:58.042     14:33:36 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2
00:22:58.042     14:33:36 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2
00:22:58.042     14:33:36 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:58.042     14:33:36 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:58.042    14:33:36 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0
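The cmp_versions trace above splits each version string on . - : and compares the components numerically left to right, padding the shorter list, so "1.15" < "2" and lt returns 0 (true). A minimal sketch of the same comparison (function name mine; the bash version also coerces non-numeric parts, which this skips):

    import re

    # Component-wise numeric version compare, as traced in scripts/common.sh.
    def version_lt(a: str, b: str) -> bool:
        pa = [int(x) for x in re.split(r"[.:-]", a) if x.isdigit()]
        pb = [int(x) for x in re.split(r"[.:-]", b) if x.isdigit()]
        n = max(len(pa), len(pb))
        pa += [0] * (n - len(pa))   # unset bash array slots compare as 0
        pb += [0] * (n - len(pb))
        return pa < pb

    print(version_lt("1.15", "2"))  # True, matching "lt 1.15 2" -> return 0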
00:22:58.042    14:33:36 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:58.042    14:33:36 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:22:58.042  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:58.042  		--rc genhtml_branch_coverage=1
00:22:58.042  		--rc genhtml_function_coverage=1
00:22:58.042  		--rc genhtml_legend=1
00:22:58.042  		--rc geninfo_all_blocks=1
00:22:58.042  		--rc geninfo_unexecuted_blocks=1
00:22:58.042  		
00:22:58.042  		'
00:22:58.042    14:33:36 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:22:58.042  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:58.042  		--rc genhtml_branch_coverage=1
00:22:58.042  		--rc genhtml_function_coverage=1
00:22:58.042  		--rc genhtml_legend=1
00:22:58.042  		--rc geninfo_all_blocks=1
00:22:58.042  		--rc geninfo_unexecuted_blocks=1
00:22:58.042  		
00:22:58.042  		'
00:22:58.042    14:33:36 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:22:58.042  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:58.042  		--rc genhtml_branch_coverage=1
00:22:58.042  		--rc genhtml_function_coverage=1
00:22:58.042  		--rc genhtml_legend=1
00:22:58.042  		--rc geninfo_all_blocks=1
00:22:58.042  		--rc geninfo_unexecuted_blocks=1
00:22:58.042  		
00:22:58.042  		'
00:22:58.042    14:33:36 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:22:58.043  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:58.043  		--rc genhtml_branch_coverage=1
00:22:58.043  		--rc genhtml_function_coverage=1
00:22:58.043  		--rc genhtml_legend=1
00:22:58.043  		--rc geninfo_all_blocks=1
00:22:58.043  		--rc geninfo_unexecuted_blocks=1
00:22:58.043  		
00:22:58.043  		'
00:22:58.043   14:33:36 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:22:58.043      14:33:36 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
00:22:58.043     14:33:36 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:22:58.043     14:33:36 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:22:58.043    14:33:36 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid=
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:58.043    14:33:37 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]]
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78503
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78503
00:22:58.043   14:33:37 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:22:58.043   14:33:37 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78503 ']'
00:22:58.043   14:33:37 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:58.043   14:33:37 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:58.043   14:33:37 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:58.043  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:58.043   14:33:37 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:58.043   14:33:37 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:22:58.339  [2024-11-20 14:33:37.140304] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:22:58.339  [2024-11-20 14:33:37.140493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78503 ]
00:22:58.597  [2024-11-20 14:33:37.325923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:22:58.597  [2024-11-20 14:33:37.456140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:58.597  [2024-11-20 14:33:37.456286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:58.597  [2024-11-20 14:33:37.456300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
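spdk_tgt was launched with -m 0x7, and the EAL lines above confirm the mask decodes to three reactors on cores 0-2. The decode is just the set bits of the hex mask:

    # Decode an SPDK core mask like the -m 0x7 used above.
    mask = 0x7
    cores = [bit for bit in range(mask.bit_length()) if mask >> bit & 1]
    print(len(cores), cores)  # 3 [0, 1, 2] -- matches the reactor lines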
00:22:59.533   14:33:38 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:59.533   14:33:38 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:22:59.533    14:33:38 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:22:59.533    14:33:38 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0
00:22:59.533    14:33:38 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:22:59.533    14:33:38 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424
00:22:59.533    14:33:38 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev
00:22:59.533     14:33:38 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:22:59.791    14:33:38 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:22:59.791    14:33:38 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size
00:22:59.791     14:33:38 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:22:59.791     14:33:38 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:22:59.791     14:33:38 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:22:59.791     14:33:38 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:22:59.791     14:33:38 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:22:59.791      14:33:38 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:23:00.049     14:33:38 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:23:00.049    {
00:23:00.049      "name": "nvme0n1",
00:23:00.049      "aliases": [
00:23:00.049        "ddfb73a4-80b3-43a3-995e-182d1ea207bd"
00:23:00.049      ],
00:23:00.049      "product_name": "NVMe disk",
00:23:00.049      "block_size": 4096,
00:23:00.049      "num_blocks": 1310720,
00:23:00.049      "uuid": "ddfb73a4-80b3-43a3-995e-182d1ea207bd",
00:23:00.049      "numa_id": -1,
00:23:00.049      "assigned_rate_limits": {
00:23:00.049        "rw_ios_per_sec": 0,
00:23:00.049        "rw_mbytes_per_sec": 0,
00:23:00.049        "r_mbytes_per_sec": 0,
00:23:00.049        "w_mbytes_per_sec": 0
00:23:00.049      },
00:23:00.049      "claimed": true,
00:23:00.049      "claim_type": "read_many_write_one",
00:23:00.049      "zoned": false,
00:23:00.049      "supported_io_types": {
00:23:00.049        "read": true,
00:23:00.049        "write": true,
00:23:00.049        "unmap": true,
00:23:00.049        "flush": true,
00:23:00.049        "reset": true,
00:23:00.049        "nvme_admin": true,
00:23:00.049        "nvme_io": true,
00:23:00.049        "nvme_io_md": false,
00:23:00.049        "write_zeroes": true,
00:23:00.049        "zcopy": false,
00:23:00.049        "get_zone_info": false,
00:23:00.049        "zone_management": false,
00:23:00.049        "zone_append": false,
00:23:00.049        "compare": true,
00:23:00.049        "compare_and_write": false,
00:23:00.049        "abort": true,
00:23:00.049        "seek_hole": false,
00:23:00.049        "seek_data": false,
00:23:00.049        "copy": true,
00:23:00.049        "nvme_iov_md": false
00:23:00.049      },
00:23:00.049      "driver_specific": {
00:23:00.049        "nvme": [
00:23:00.049          {
00:23:00.050            "pci_address": "0000:00:11.0",
00:23:00.050            "trid": {
00:23:00.050              "trtype": "PCIe",
00:23:00.050              "traddr": "0000:00:11.0"
00:23:00.050            },
00:23:00.050            "ctrlr_data": {
00:23:00.050              "cntlid": 0,
00:23:00.050              "vendor_id": "0x1b36",
00:23:00.050              "model_number": "QEMU NVMe Ctrl",
00:23:00.050              "serial_number": "12341",
00:23:00.050              "firmware_revision": "8.0.0",
00:23:00.050              "subnqn": "nqn.2019-08.org.qemu:12341",
00:23:00.050              "oacs": {
00:23:00.050                "security": 0,
00:23:00.050                "format": 1,
00:23:00.050                "firmware": 0,
00:23:00.050                "ns_manage": 1
00:23:00.050              },
00:23:00.050              "multi_ctrlr": false,
00:23:00.050              "ana_reporting": false
00:23:00.050            },
00:23:00.050            "vs": {
00:23:00.050              "nvme_version": "1.4"
00:23:00.050            },
00:23:00.050            "ns_data": {
00:23:00.050              "id": 1,
00:23:00.050              "can_share": false
00:23:00.050            }
00:23:00.050          }
00:23:00.050        ],
00:23:00.050        "mp_policy": "active_passive"
00:23:00.050      }
00:23:00.050    }
00:23:00.050  ]'
00:23:00.050      14:33:38 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:23:00.308     14:33:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:23:00.308      14:33:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:23:00.308     14:33:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720
00:23:00.308     14:33:39 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:23:00.308     14:33:39 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120
00:23:00.308    14:33:39 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120
00:23:00.308    14:33:39 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
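get_bdev_size above multiplies the two jq'd fields and converts to MiB: 4096-byte blocks times 1310720 blocks is exactly 5120 MiB, which is the value echoed back and then compared against the requested 103424. The arithmetic:

    # bdev size in MiB from the jq'd fields above (get_bdev_size).
    block_size = 4096       # .block_size
    num_blocks = 1310720    # .num_blocks
    size_mib = block_size * num_blocks // (1024 * 1024)
    print(size_mib)         # 5120, matching "echo 5120" above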
00:23:00.308    14:33:39 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols
00:23:00.308     14:33:39 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:23:00.308     14:33:39 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:23:00.566    14:33:39 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=3e5ceb1f-9ed9-4ff0-b63c-edf6b36db8eb
00:23:00.566    14:33:39 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores
00:23:00.566    14:33:39 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e5ceb1f-9ed9-4ff0-b63c-edf6b36db8eb
00:23:00.824     14:33:39 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:23:01.081    14:33:39 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=f2ba17cc-3772-41b8-9c85-8fa3ad8de82f
00:23:01.081    14:33:39 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f2ba17cc-3772-41b8-9c85-8fa3ad8de82f
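The lvstore/lvol setup traced above is four rpc.py calls: list existing stores, delete the stale one by UUID, create a store named lvs on nvme0n1, then create a thin-provisioned (-t) 103424 MB volume in it. A sketch driving the same sequence from Python via subprocess (rpc.py path taken from this environment; helper name and the assumption that each call prints its UUID/name on stdout are mine):

    import json, subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path from this log

    def rpc(*args: str) -> str:
        return subprocess.run([RPC, *args], check=True,
                              capture_output=True, text=True).stdout

    # Mirror of the clear_lvols + create sequence traced above.
    for store in json.loads(rpc("bdev_lvol_get_lvstores") or "[]"):
        rpc("bdev_lvol_delete_lvstore", "-u", store["uuid"])
    lvs = rpc("bdev_lvol_create_lvstore", "nvme0n1", "lvs").strip()
    lvol = rpc("bdev_lvol_create", "nvme0n1p0", "103424", "-t",
               "-u", lvs).strip()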
00:23:01.339   14:33:40 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=ff389443-24bd-4665-a6fe-1543a97162d6
00:23:01.339    14:33:40 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ff389443-24bd-4665-a6fe-1543a97162d6
00:23:01.339    14:33:40 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0
00:23:01.339    14:33:40 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:23:01.339    14:33:40 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=ff389443-24bd-4665-a6fe-1543a97162d6
00:23:01.339    14:33:40 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size=
00:23:01.339     14:33:40 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size ff389443-24bd-4665-a6fe-1543a97162d6
00:23:01.339     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ff389443-24bd-4665-a6fe-1543a97162d6
00:23:01.339     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:23:01.339     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:23:01.339     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:23:01.339      14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff389443-24bd-4665-a6fe-1543a97162d6
00:23:01.601     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:23:01.601    {
00:23:01.601      "name": "ff389443-24bd-4665-a6fe-1543a97162d6",
00:23:01.601      "aliases": [
00:23:01.601        "lvs/nvme0n1p0"
00:23:01.601      ],
00:23:01.601      "product_name": "Logical Volume",
00:23:01.601      "block_size": 4096,
00:23:01.601      "num_blocks": 26476544,
00:23:01.601      "uuid": "ff389443-24bd-4665-a6fe-1543a97162d6",
00:23:01.601      "assigned_rate_limits": {
00:23:01.601        "rw_ios_per_sec": 0,
00:23:01.601        "rw_mbytes_per_sec": 0,
00:23:01.601        "r_mbytes_per_sec": 0,
00:23:01.601        "w_mbytes_per_sec": 0
00:23:01.601      },
00:23:01.601      "claimed": false,
00:23:01.601      "zoned": false,
00:23:01.601      "supported_io_types": {
00:23:01.601        "read": true,
00:23:01.601        "write": true,
00:23:01.601        "unmap": true,
00:23:01.601        "flush": false,
00:23:01.601        "reset": true,
00:23:01.601        "nvme_admin": false,
00:23:01.601        "nvme_io": false,
00:23:01.601        "nvme_io_md": false,
00:23:01.601        "write_zeroes": true,
00:23:01.601        "zcopy": false,
00:23:01.601        "get_zone_info": false,
00:23:01.601        "zone_management": false,
00:23:01.601        "zone_append": false,
00:23:01.601        "compare": false,
00:23:01.601        "compare_and_write": false,
00:23:01.601        "abort": false,
00:23:01.601        "seek_hole": true,
00:23:01.601        "seek_data": true,
00:23:01.601        "copy": false,
00:23:01.601        "nvme_iov_md": false
00:23:01.601      },
00:23:01.601      "driver_specific": {
00:23:01.601        "lvol": {
00:23:01.601          "lvol_store_uuid": "f2ba17cc-3772-41b8-9c85-8fa3ad8de82f",
00:23:01.601          "base_bdev": "nvme0n1",
00:23:01.601          "thin_provision": true,
00:23:01.601          "num_allocated_clusters": 0,
00:23:01.601          "snapshot": false,
00:23:01.601          "clone": false,
00:23:01.601          "esnap_clone": false
00:23:01.601        }
00:23:01.601      }
00:23:01.601    }
00:23:01.601  ]'
00:23:01.601      14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:23:01.601     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:23:01.601      14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:23:01.871     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
00:23:01.871     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:23:01.871     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:23:01.871    14:33:40 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171
00:23:01.871    14:33:40 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev
00:23:01.871     14:33:40 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:23:02.129    14:33:40 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:23:02.129    14:33:40 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]]
00:23:02.129     14:33:40 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size ff389443-24bd-4665-a6fe-1543a97162d6
00:23:02.129     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ff389443-24bd-4665-a6fe-1543a97162d6
00:23:02.129     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:23:02.129     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:23:02.129     14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:23:02.129      14:33:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff389443-24bd-4665-a6fe-1543a97162d6
00:23:02.387     14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:23:02.387    {
00:23:02.387      "name": "ff389443-24bd-4665-a6fe-1543a97162d6",
00:23:02.387      "aliases": [
00:23:02.387        "lvs/nvme0n1p0"
00:23:02.387      ],
00:23:02.387      "product_name": "Logical Volume",
00:23:02.387      "block_size": 4096,
00:23:02.387      "num_blocks": 26476544,
00:23:02.387      "uuid": "ff389443-24bd-4665-a6fe-1543a97162d6",
00:23:02.387      "assigned_rate_limits": {
00:23:02.387        "rw_ios_per_sec": 0,
00:23:02.387        "rw_mbytes_per_sec": 0,
00:23:02.387        "r_mbytes_per_sec": 0,
00:23:02.387        "w_mbytes_per_sec": 0
00:23:02.387      },
00:23:02.387      "claimed": false,
00:23:02.387      "zoned": false,
00:23:02.387      "supported_io_types": {
00:23:02.387        "read": true,
00:23:02.387        "write": true,
00:23:02.387        "unmap": true,
00:23:02.387        "flush": false,
00:23:02.387        "reset": true,
00:23:02.387        "nvme_admin": false,
00:23:02.387        "nvme_io": false,
00:23:02.387        "nvme_io_md": false,
00:23:02.387        "write_zeroes": true,
00:23:02.387        "zcopy": false,
00:23:02.387        "get_zone_info": false,
00:23:02.387        "zone_management": false,
00:23:02.387        "zone_append": false,
00:23:02.387        "compare": false,
00:23:02.387        "compare_and_write": false,
00:23:02.387        "abort": false,
00:23:02.388        "seek_hole": true,
00:23:02.388        "seek_data": true,
00:23:02.388        "copy": false,
00:23:02.388        "nvme_iov_md": false
00:23:02.388      },
00:23:02.388      "driver_specific": {
00:23:02.388        "lvol": {
00:23:02.388          "lvol_store_uuid": "f2ba17cc-3772-41b8-9c85-8fa3ad8de82f",
00:23:02.388          "base_bdev": "nvme0n1",
00:23:02.388          "thin_provision": true,
00:23:02.388          "num_allocated_clusters": 0,
00:23:02.388          "snapshot": false,
00:23:02.388          "clone": false,
00:23:02.388          "esnap_clone": false
00:23:02.388        }
00:23:02.388      }
00:23:02.388    }
00:23:02.388  ]'
00:23:02.388      14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:23:02.388     14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:23:02.388      14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:23:02.388     14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
00:23:02.388     14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:23:02.388     14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:23:02.388    14:33:41 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171
00:23:02.388    14:33:41 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:23:02.953   14:33:41 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0
00:23:02.953   14:33:41 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60
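common.sh@50 then carves that 5171 MiB region out of nvc0n1 as a single split; SPDK names split bdevs <base_bdev>p<index>, so trim.sh@44 picks it up as nv_cache=nvc0n1p0. A hypothetical teardown for the split (not issued in this run) would delete all splits of the base bdev:

    # Hypothetical cleanup, not part of this test flow.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_delete nvc0n1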
00:23:02.953    14:33:41 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size ff389443-24bd-4665-a6fe-1543a97162d6
00:23:02.953    14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=ff389443-24bd-4665-a6fe-1543a97162d6
00:23:02.953    14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:23:02.953    14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:23:02.953    14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:23:02.953     14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff389443-24bd-4665-a6fe-1543a97162d6
00:23:02.953    14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:23:02.953    {
00:23:02.953      "name": "ff389443-24bd-4665-a6fe-1543a97162d6",
00:23:02.953      "aliases": [
00:23:02.953        "lvs/nvme0n1p0"
00:23:02.953      ],
00:23:02.953      "product_name": "Logical Volume",
00:23:02.953      "block_size": 4096,
00:23:02.953      "num_blocks": 26476544,
00:23:02.953      "uuid": "ff389443-24bd-4665-a6fe-1543a97162d6",
00:23:02.953      "assigned_rate_limits": {
00:23:02.953        "rw_ios_per_sec": 0,
00:23:02.953        "rw_mbytes_per_sec": 0,
00:23:02.953        "r_mbytes_per_sec": 0,
00:23:02.953        "w_mbytes_per_sec": 0
00:23:02.953      },
00:23:02.953      "claimed": false,
00:23:02.953      "zoned": false,
00:23:02.953      "supported_io_types": {
00:23:02.953        "read": true,
00:23:02.953        "write": true,
00:23:02.953        "unmap": true,
00:23:02.953        "flush": false,
00:23:02.953        "reset": true,
00:23:02.953        "nvme_admin": false,
00:23:02.953        "nvme_io": false,
00:23:02.953        "nvme_io_md": false,
00:23:02.953        "write_zeroes": true,
00:23:02.953        "zcopy": false,
00:23:02.953        "get_zone_info": false,
00:23:02.953        "zone_management": false,
00:23:02.953        "zone_append": false,
00:23:02.953        "compare": false,
00:23:02.953        "compare_and_write": false,
00:23:02.953        "abort": false,
00:23:02.953        "seek_hole": true,
00:23:02.953        "seek_data": true,
00:23:02.953        "copy": false,
00:23:02.953        "nvme_iov_md": false
00:23:02.953      },
00:23:02.953      "driver_specific": {
00:23:02.953        "lvol": {
00:23:02.953          "lvol_store_uuid": "f2ba17cc-3772-41b8-9c85-8fa3ad8de82f",
00:23:02.953          "base_bdev": "nvme0n1",
00:23:02.953          "thin_provision": true,
00:23:02.953          "num_allocated_clusters": 0,
00:23:02.953          "snapshot": false,
00:23:02.953          "clone": false,
00:23:02.953          "esnap_clone": false
00:23:02.953        }
00:23:02.953      }
00:23:02.953    }
00:23:02.953  ]'
00:23:02.953     14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:23:03.211    14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:23:03.211     14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:23:03.211    14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
00:23:03.211    14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:23:03.211    14:33:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:23:03.211   14:33:41 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60
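trim.sh@47 turns the l2p_percentage=60 from @46 into an absolute DRAM budget. The values are consistent with bdev_size_mb * l2p_percentage / 100 / 1024 under integer division (an assumed formula, reconstructed from the numbers rather than from trim.sh itself):

    # 103424 MiB * 60 / 100 = 62054; 62054 / 1024 -> 60, fed to --l2p_dram_limit below.
    echo $(( 103424 * 60 / 100 / 1024 ))   # -> 60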
00:23:03.211   14:33:41 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ff389443-24bd-4665-a6fe-1543a97162d6 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
00:23:03.470  [2024-11-20 14:33:42.310026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.470  [2024-11-20 14:33:42.310096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:03.470  [2024-11-20 14:33:42.310123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:03.470  [2024-11-20 14:33:42.310136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.470  [2024-11-20 14:33:42.313690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.470  [2024-11-20 14:33:42.313750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:03.470  [2024-11-20 14:33:42.313772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.514 ms
00:23:03.470  [2024-11-20 14:33:42.313785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.470  [2024-11-20 14:33:42.313987] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:03.470  [2024-11-20 14:33:42.314980] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:03.470  [2024-11-20 14:33:42.315026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.470  [2024-11-20 14:33:42.315041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:03.470  [2024-11-20 14:33:42.315056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.054 ms
00:23:03.470  [2024-11-20 14:33:42.315068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.470  [2024-11-20 14:33:42.315306] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4beb1ddc-3b05-4a56-8819-65e82b329bd5
00:23:03.470  [2024-11-20 14:33:42.316502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.470  [2024-11-20 14:33:42.316559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:23:03.470  [2024-11-20 14:33:42.316612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.025 ms
00:23:03.470  [2024-11-20 14:33:42.316631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.470  [2024-11-20 14:33:42.321737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.470  [2024-11-20 14:33:42.321829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:03.470  [2024-11-20 14:33:42.321852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.992 ms
00:23:03.470  [2024-11-20 14:33:42.321871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.470  [2024-11-20 14:33:42.322087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.470  [2024-11-20 14:33:42.322114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:03.470  [2024-11-20 14:33:42.322129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.106 ms
00:23:03.470  [2024-11-20 14:33:42.322148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.470  [2024-11-20 14:33:42.322204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.470  [2024-11-20 14:33:42.322229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:03.470  [2024-11-20 14:33:42.322243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:23:03.470  [2024-11-20 14:33:42.322260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.470  [2024-11-20 14:33:42.322308] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:03.470  [2024-11-20 14:33:42.326975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.470  [2024-11-20 14:33:42.327021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:03.471  [2024-11-20 14:33:42.327043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.676 ms
00:23:03.471  [2024-11-20 14:33:42.327055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.471  [2024-11-20 14:33:42.327176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.471  [2024-11-20 14:33:42.327195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:03.471  [2024-11-20 14:33:42.327211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:23:03.471  [2024-11-20 14:33:42.327245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.471  [2024-11-20 14:33:42.327293] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:23:03.471  [2024-11-20 14:33:42.327471] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:03.471  [2024-11-20 14:33:42.327498] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:03.471  [2024-11-20 14:33:42.327515] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:03.471  [2024-11-20 14:33:42.327532] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:03.471  [2024-11-20 14:33:42.327546] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:03.471  [2024-11-20 14:33:42.327560] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:03.471  [2024-11-20 14:33:42.327587] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:03.471  [2024-11-20 14:33:42.327603] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:03.471  [2024-11-20 14:33:42.327618] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:03.471  [2024-11-20 14:33:42.327633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.471  [2024-11-20 14:33:42.327645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:03.471  [2024-11-20 14:33:42.327659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.343 ms
00:23:03.471  [2024-11-20 14:33:42.327670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.471  [2024-11-20 14:33:42.327784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.471  [2024-11-20 14:33:42.327798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:03.471  [2024-11-20 14:33:42.327812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.070 ms
00:23:03.471  [2024-11-20 14:33:42.327823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.471  [2024-11-20 14:33:42.327966] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:03.471  [2024-11-20 14:33:42.327991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:03.471  [2024-11-20 14:33:42.328008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:03.471  [2024-11-20 14:33:42.328020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:03.471  [2024-11-20 14:33:42.328045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:03.471  [2024-11-20 14:33:42.328068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:03.471  [2024-11-20 14:33:42.328081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:03.471  [2024-11-20 14:33:42.328105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:03.471  [2024-11-20 14:33:42.328117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:03.471  [2024-11-20 14:33:42.328129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:03.471  [2024-11-20 14:33:42.328139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:03.471  [2024-11-20 14:33:42.328152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:03.471  [2024-11-20 14:33:42.328162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:03.471  [2024-11-20 14:33:42.328187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:03.471  [2024-11-20 14:33:42.328199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:03.471  [2024-11-20 14:33:42.328229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:03.471  [2024-11-20 14:33:42.328251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:03.471  [2024-11-20 14:33:42.328262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:03.471  [2024-11-20 14:33:42.328284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:03.471  [2024-11-20 14:33:42.328297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:03.471  [2024-11-20 14:33:42.328319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:03.471  [2024-11-20 14:33:42.328330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:03.471  [2024-11-20 14:33:42.328352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:03.471  [2024-11-20 14:33:42.328367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:03.471  [2024-11-20 14:33:42.328389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:03.471  [2024-11-20 14:33:42.328400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:03.471  [2024-11-20 14:33:42.328412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:03.471  [2024-11-20 14:33:42.328422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:03.471  [2024-11-20 14:33:42.328434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:03.471  [2024-11-20 14:33:42.328444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:03.471  [2024-11-20 14:33:42.328467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:03.471  [2024-11-20 14:33:42.328484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328494] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:03.471  [2024-11-20 14:33:42.328508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:03.471  [2024-11-20 14:33:42.328519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:03.471  [2024-11-20 14:33:42.328540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:03.471  [2024-11-20 14:33:42.328560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:03.471  [2024-11-20 14:33:42.328605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:03.471  [2024-11-20 14:33:42.328619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:03.471  [2024-11-20 14:33:42.328632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:03.471  [2024-11-20 14:33:42.328643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:03.471  [2024-11-20 14:33:42.328655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:23:03.471  [2024-11-20 14:33:42.328671] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:03.471  [2024-11-20 14:33:42.328693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:03.471  [2024-11-20 14:33:42.328709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:03.471  [2024-11-20 14:33:42.328722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:03.471  [2024-11-20 14:33:42.328734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:03.471  [2024-11-20 14:33:42.328747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:03.471  [2024-11-20 14:33:42.328759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:03.471  [2024-11-20 14:33:42.328772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:03.471  [2024-11-20 14:33:42.328783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:03.471  [2024-11-20 14:33:42.328796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:03.471  [2024-11-20 14:33:42.328808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:03.471  [2024-11-20 14:33:42.328823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:03.471  [2024-11-20 14:33:42.328834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:03.471  [2024-11-20 14:33:42.328851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:03.471  [2024-11-20 14:33:42.328862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:03.471  [2024-11-20 14:33:42.328876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:03.471  [2024-11-20 14:33:42.328887] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:03.471  [2024-11-20 14:33:42.328907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:03.471  [2024-11-20 14:33:42.328920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:03.471  [2024-11-20 14:33:42.328933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:03.471  [2024-11-20 14:33:42.328945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:03.472  [2024-11-20 14:33:42.328961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
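The layout dump above is internally consistent: 23592960 L2P entries at the 4-byte address size come to exactly the 90.00 MiB of the l2p region, and each 8.00 MiB p2l region matches the 2048 checkpoint pages assuming one 4 KiB device block per page:

    echo $(( 23592960 * 4 / 1024 / 1024 ))   # -> 90  (MiB, Region l2p)
    echo $(( 2048 * 4096 / 1024 / 1024 ))    # -> 8   (MiB, each p2l region)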
00:23:03.472  [2024-11-20 14:33:42.328974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:03.472  [2024-11-20 14:33:42.328987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:03.472  [2024-11-20 14:33:42.328999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.080 ms
00:23:03.472  [2024-11-20 14:33:42.329012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:03.472  [2024-11-20 14:33:42.329107] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:23:03.472  [2024-11-20 14:33:42.329128] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:23:05.372  [2024-11-20 14:33:44.243714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.372  [2024-11-20 14:33:44.243816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:23:05.372  [2024-11-20 14:33:44.243843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1914.590 ms
00:23:05.372  [2024-11-20 14:33:44.243862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.372  [2024-11-20 14:33:44.283999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.372  [2024-11-20 14:33:44.284075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:05.372  [2024-11-20 14:33:44.284109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.684 ms
00:23:05.372  [2024-11-20 14:33:44.284127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.372  [2024-11-20 14:33:44.284353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.372  [2024-11-20 14:33:44.284382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:05.372  [2024-11-20 14:33:44.284407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.080 ms
00:23:05.372  [2024-11-20 14:33:44.284426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.372  [2024-11-20 14:33:44.342629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.372  [2024-11-20 14:33:44.342708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:05.372  [2024-11-20 14:33:44.342733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 58.118 ms
00:23:05.372  [2024-11-20 14:33:44.342750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.372  [2024-11-20 14:33:44.342930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.372  [2024-11-20 14:33:44.342960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:05.372  [2024-11-20 14:33:44.342977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:23:05.372  [2024-11-20 14:33:44.342993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.372  [2024-11-20 14:33:44.343385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.372  [2024-11-20 14:33:44.343429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:05.372  [2024-11-20 14:33:44.343446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.336 ms
00:23:05.372  [2024-11-20 14:33:44.343463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.372  [2024-11-20 14:33:44.343660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.372  [2024-11-20 14:33:44.343689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:05.372  [2024-11-20 14:33:44.343705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.153 ms
00:23:05.372  [2024-11-20 14:33:44.343723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.630  [2024-11-20 14:33:44.366855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.630  [2024-11-20 14:33:44.366944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:05.630  [2024-11-20 14:33:44.366969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.059 ms
00:23:05.630  [2024-11-20 14:33:44.366987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.630  [2024-11-20 14:33:44.384339] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:05.630  [2024-11-20 14:33:44.400918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.630  [2024-11-20 14:33:44.401012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:05.630  [2024-11-20 14:33:44.401040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.713 ms
00:23:05.630  [2024-11-20 14:33:44.401056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.630  [2024-11-20 14:33:44.465497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.630  [2024-11-20 14:33:44.465601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:23:05.630  [2024-11-20 14:33:44.465633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 64.265 ms
00:23:05.630  [2024-11-20 14:33:44.465649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.630  [2024-11-20 14:33:44.466047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.630  [2024-11-20 14:33:44.466090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:05.630  [2024-11-20 14:33:44.466115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.234 ms
00:23:05.631  [2024-11-20 14:33:44.466130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.631  [2024-11-20 14:33:44.508535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.631  [2024-11-20 14:33:44.508665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:23:05.631  [2024-11-20 14:33:44.508710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 42.336 ms
00:23:05.631  [2024-11-20 14:33:44.508736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.631  [2024-11-20 14:33:44.565797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.631  [2024-11-20 14:33:44.565878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:23:05.631  [2024-11-20 14:33:44.565914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 56.803 ms
00:23:05.631  [2024-11-20 14:33:44.565934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.631  [2024-11-20 14:33:44.567118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.631  [2024-11-20 14:33:44.567171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:05.631  [2024-11-20 14:33:44.567199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.962 ms
00:23:05.631  [2024-11-20 14:33:44.567216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.889  [2024-11-20 14:33:44.673590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.889  [2024-11-20 14:33:44.673662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:23:05.889  [2024-11-20 14:33:44.673691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 106.305 ms
00:23:05.889  [2024-11-20 14:33:44.673705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.889  [2024-11-20 14:33:44.707433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.889  [2024-11-20 14:33:44.707499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:23:05.889  [2024-11-20 14:33:44.707523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.567 ms
00:23:05.889  [2024-11-20 14:33:44.707536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.889  [2024-11-20 14:33:44.739713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.889  [2024-11-20 14:33:44.739769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:23:05.889  [2024-11-20 14:33:44.739792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.027 ms
00:23:05.889  [2024-11-20 14:33:44.739804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.889  [2024-11-20 14:33:44.771687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.889  [2024-11-20 14:33:44.771739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:05.889  [2024-11-20 14:33:44.771761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.758 ms
00:23:05.889  [2024-11-20 14:33:44.771794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.889  [2024-11-20 14:33:44.771945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.889  [2024-11-20 14:33:44.771980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:05.889  [2024-11-20 14:33:44.772002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:23:05.889  [2024-11-20 14:33:44.772014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.889  [2024-11-20 14:33:44.772112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:05.889  [2024-11-20 14:33:44.772128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:05.889  [2024-11-20 14:33:44.772142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.042 ms
00:23:05.889  [2024-11-20 14:33:44.772154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:05.889  [2024-11-20 14:33:44.773179] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:05.889  [2024-11-20 14:33:44.777323] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2462.849 ms, result 0
00:23:05.889  [2024-11-20 14:33:44.778222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:05.889  {
00:23:05.889    "name": "ftl0",
00:23:05.889    "uuid": "4beb1ddc-3b05-4a56-8819-65e82b329bd5"
00:23:05.889  }
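bdev_ftl_create returns the bdev name and the UUID generated during the superblock-init step above. A later run could reattach the same instance via bdev_ftl_load with that UUID; the sketch below is hypothetical (not issued here) and assumes the flags mirror bdev_ftl_create:

    # Hypothetical reload of the same FTL instance; flag spelling assumed to mirror create.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_load -b ftl0 \
        -d ff389443-24bd-4665-a6fe-1543a97162d6 -c nvc0n1p0 \
        -u 4beb1ddc-3b05-4a56-8819-65e82b329bd5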
00:23:05.889   14:33:44 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0
00:23:05.889   14:33:44 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:23:05.889   14:33:44 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:23:05.889   14:33:44 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i
00:23:05.889   14:33:44 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:23:05.889   14:33:44 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:23:05.889   14:33:44 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:23:06.147   14:33:45 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:23:06.406  [
00:23:06.406    {
00:23:06.406      "name": "ftl0",
00:23:06.406      "aliases": [
00:23:06.406        "4beb1ddc-3b05-4a56-8819-65e82b329bd5"
00:23:06.406      ],
00:23:06.406      "product_name": "FTL disk",
00:23:06.406      "block_size": 4096,
00:23:06.406      "num_blocks": 23592960,
00:23:06.406      "uuid": "4beb1ddc-3b05-4a56-8819-65e82b329bd5",
00:23:06.406      "assigned_rate_limits": {
00:23:06.406        "rw_ios_per_sec": 0,
00:23:06.406        "rw_mbytes_per_sec": 0,
00:23:06.406        "r_mbytes_per_sec": 0,
00:23:06.406        "w_mbytes_per_sec": 0
00:23:06.406      },
00:23:06.406      "claimed": false,
00:23:06.406      "zoned": false,
00:23:06.406      "supported_io_types": {
00:23:06.406        "read": true,
00:23:06.406        "write": true,
00:23:06.406        "unmap": true,
00:23:06.406        "flush": true,
00:23:06.406        "reset": false,
00:23:06.406        "nvme_admin": false,
00:23:06.406        "nvme_io": false,
00:23:06.406        "nvme_io_md": false,
00:23:06.406        "write_zeroes": true,
00:23:06.406        "zcopy": false,
00:23:06.406        "get_zone_info": false,
00:23:06.406        "zone_management": false,
00:23:06.406        "zone_append": false,
00:23:06.406        "compare": false,
00:23:06.406        "compare_and_write": false,
00:23:06.406        "abort": false,
00:23:06.406        "seek_hole": false,
00:23:06.406        "seek_data": false,
00:23:06.406        "copy": false,
00:23:06.406        "nvme_iov_md": false
00:23:06.406      },
00:23:06.406      "driver_specific": {
00:23:06.406        "ftl": {
00:23:06.406          "base_bdev": "ff389443-24bd-4665-a6fe-1543a97162d6",
00:23:06.406          "cache": "nvc0n1p0"
00:23:06.406        }
00:23:06.406      }
00:23:06.406    }
00:23:06.406  ]
00:23:06.406   14:33:45 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0
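The @903-@911 trace above is waitforbdev: default the timeout to 2000 when none is passed, let examine callbacks finish, then poll for the bdev with that timeout. A sketch reconstructed from the trace (not the literal autotest_common.sh source):

    waitforbdev_sketch() {
        local bdev_name=$1 bdev_timeout=${2:-2000} i    # defaulting seen at @906
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine    # @908
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
            -b "$bdev_name" -t "$bdev_timeout" > /dev/null && return 0       # @910-@911
    }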
00:23:06.406   14:33:45 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": ['
00:23:06.406   14:33:45 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:23:06.664   14:33:45 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}'
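trim.sh@54-@56 sandwich the saved bdev subsystem config between '{"subsystems": [' and ']}', producing a JSON document an SPDK application can consume later through its --json option (the destination and use are not visible in this excerpt). The shape, with a hypothetical output path:

    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /tmp/ftl_config.json   # hypothetical path, not shown in this log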
00:23:06.664    14:33:45 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0
00:23:06.923   14:33:45 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[
00:23:06.923    {
00:23:06.923      "name": "ftl0",
00:23:06.923      "aliases": [
00:23:06.923        "4beb1ddc-3b05-4a56-8819-65e82b329bd5"
00:23:06.923      ],
00:23:06.923      "product_name": "FTL disk",
00:23:06.923      "block_size": 4096,
00:23:06.923      "num_blocks": 23592960,
00:23:06.923      "uuid": "4beb1ddc-3b05-4a56-8819-65e82b329bd5",
00:23:06.923      "assigned_rate_limits": {
00:23:06.923        "rw_ios_per_sec": 0,
00:23:06.923        "rw_mbytes_per_sec": 0,
00:23:06.923        "r_mbytes_per_sec": 0,
00:23:06.923        "w_mbytes_per_sec": 0
00:23:06.923      },
00:23:06.923      "claimed": false,
00:23:06.923      "zoned": false,
00:23:06.923      "supported_io_types": {
00:23:06.923        "read": true,
00:23:06.923        "write": true,
00:23:06.923        "unmap": true,
00:23:06.923        "flush": true,
00:23:06.923        "reset": false,
00:23:06.923        "nvme_admin": false,
00:23:06.923        "nvme_io": false,
00:23:06.923        "nvme_io_md": false,
00:23:06.923        "write_zeroes": true,
00:23:06.923        "zcopy": false,
00:23:06.923        "get_zone_info": false,
00:23:06.923        "zone_management": false,
00:23:06.923        "zone_append": false,
00:23:06.923        "compare": false,
00:23:06.923        "compare_and_write": false,
00:23:06.923        "abort": false,
00:23:06.923        "seek_hole": false,
00:23:06.923        "seek_data": false,
00:23:06.923        "copy": false,
00:23:06.923        "nvme_iov_md": false
00:23:06.923      },
00:23:06.923      "driver_specific": {
00:23:06.923        "ftl": {
00:23:06.923          "base_bdev": "ff389443-24bd-4665-a6fe-1543a97162d6",
00:23:06.923          "cache": "nvc0n1p0"
00:23:06.923        }
00:23:06.923      }
00:23:06.923    }
00:23:06.923  ]'
00:23:06.923    14:33:45 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks'
00:23:07.181   14:33:45 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960
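The 23592960 blocks reported for ftl0 work out to 92160 MiB of exposed capacity, exactly 90% of the 102400.00 MiB data_btm region from the layout dump, which lines up with the --overprovisioning 10 passed to bdev_ftl_create:

    echo $(( 23592960 * 4096 / 1024 / 1024 ))   # -> 92160 MiB exposed by ftl0
    echo $(( 102400 * 90 / 100 ))               # -> 92160 MiB = data_btm minus 10% OP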
00:23:07.181   14:33:45 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:23:07.441  [2024-11-20 14:33:46.202594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.202663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:07.441  [2024-11-20 14:33:46.202688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:23:07.441  [2024-11-20 14:33:46.202706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.202755] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:07.441  [2024-11-20 14:33:46.206111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.206148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:07.441  [2024-11-20 14:33:46.206169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.327 ms
00:23:07.441  [2024-11-20 14:33:46.206181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.206800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.206835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:07.441  [2024-11-20 14:33:46.206853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.550 ms
00:23:07.441  [2024-11-20 14:33:46.206865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.210598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.210631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:07.441  [2024-11-20 14:33:46.210648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.695 ms
00:23:07.441  [2024-11-20 14:33:46.210660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.218328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.218364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:07.441  [2024-11-20 14:33:46.218383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.610 ms
00:23:07.441  [2024-11-20 14:33:46.218395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.250135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.250192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:07.441  [2024-11-20 14:33:46.250218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.619 ms
00:23:07.441  [2024-11-20 14:33:46.250230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.268910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.268962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:07.441  [2024-11-20 14:33:46.268984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.566 ms
00:23:07.441  [2024-11-20 14:33:46.269001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.269298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.269333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:07.441  [2024-11-20 14:33:46.269351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.146 ms
00:23:07.441  [2024-11-20 14:33:46.269362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.300793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.300843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:07.441  [2024-11-20 14:33:46.300865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.386 ms
00:23:07.441  [2024-11-20 14:33:46.300877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.332210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.332255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:07.441  [2024-11-20 14:33:46.332278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.225 ms
00:23:07.441  [2024-11-20 14:33:46.332290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.363292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.363338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:07.441  [2024-11-20 14:33:46.363359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.896 ms
00:23:07.441  [2024-11-20 14:33:46.363371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.395422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.441  [2024-11-20 14:33:46.395484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:07.441  [2024-11-20 14:33:46.395507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.797 ms
00:23:07.441  [2024-11-20 14:33:46.395519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.441  [2024-11-20 14:33:46.395658] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:07.441  [2024-11-20 14:33:46.395688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.395999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.441  [2024-11-20 14:33:46.396218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.396989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.397005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.397018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.397033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.397046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.397059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.397071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.397087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:23:07.442  [2024-11-20 14:33:46.397109] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:07.442  [2024-11-20 14:33:46.397126] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         4beb1ddc-3b05-4a56-8819-65e82b329bd5
00:23:07.442  [2024-11-20 14:33:46.397138] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:07.442  [2024-11-20 14:33:46.397151] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:07.442  [2024-11-20 14:33:46.397162] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:07.442  [2024-11-20 14:33:46.397179] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:07.442  [2024-11-20 14:33:46.397190] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:07.442  [2024-11-20 14:33:46.397203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:07.442  [2024-11-20 14:33:46.397215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:07.442  [2024-11-20 14:33:46.397227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:07.442  [2024-11-20 14:33:46.397237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:23:07.442  [2024-11-20 14:33:46.397251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.442  [2024-11-20 14:33:46.397263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:07.442  [2024-11-20 14:33:46.397278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.606 ms
00:23:07.442  [2024-11-20 14:33:46.397307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.442  [2024-11-20 14:33:46.414119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.442  [2024-11-20 14:33:46.414167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:07.442  [2024-11-20 14:33:46.414191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.763 ms
00:23:07.442  [2024-11-20 14:33:46.414204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.442  [2024-11-20 14:33:46.414725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:07.442  [2024-11-20 14:33:46.414758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:07.442  [2024-11-20 14:33:46.414777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.428 ms
00:23:07.442  [2024-11-20 14:33:46.414789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.473183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.473252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:07.701  [2024-11-20 14:33:46.473274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.473286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.473442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.473461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:07.701  [2024-11-20 14:33:46.473476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.473487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.473604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.473626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:07.701  [2024-11-20 14:33:46.473647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.473659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.473702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.473715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:07.701  [2024-11-20 14:33:46.473729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.473740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.585732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.585794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:07.701  [2024-11-20 14:33:46.585817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.585829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.671159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.671229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:07.701  [2024-11-20 14:33:46.671252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.671264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.671403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.671433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:07.701  [2024-11-20 14:33:46.671475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.671490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.671558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.671592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:07.701  [2024-11-20 14:33:46.671609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.671621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.671780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.671811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:07.701  [2024-11-20 14:33:46.671828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.671842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.671926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.671945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:07.701  [2024-11-20 14:33:46.671960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.671971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.672037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.672061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:07.701  [2024-11-20 14:33:46.672079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.672091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.672171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:07.701  [2024-11-20 14:33:46.672187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:07.701  [2024-11-20 14:33:46.672202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:07.701  [2024-11-20 14:33:46.672214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:07.701  [2024-11-20 14:33:46.672440] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 469.852 ms, result 0
00:23:07.701  true
00:23:07.964   14:33:46 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78503
00:23:07.964   14:33:46 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78503 ']'
00:23:07.964   14:33:46 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78503
00:23:07.964    14:33:46 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:23:07.964   14:33:46 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:07.964    14:33:46 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78503
00:23:07.964  killing process with pid 78503
00:23:07.964   14:33:46 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:07.964   14:33:46 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:07.964   14:33:46 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78503'
00:23:07.964   14:33:46 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78503
00:23:07.964   14:33:46 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78503
00:23:13.233   14:33:51 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:23:13.800  65536+0 records in
00:23:13.800  65536+0 records out
00:23:13.800  268435456 bytes (268 MB, 256 MiB) copied, 1.26139 s, 213 MB/s
00:23:13.800   14:33:52 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:14.057  [2024-11-20 14:33:52.844248] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:23:14.057  [2024-11-20 14:33:52.844424] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78707 ]
00:23:14.057  [2024-11-20 14:33:53.031734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:14.316  [2024-11-20 14:33:53.172784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:14.574  [2024-11-20 14:33:53.510937] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:14.574  [2024-11-20 14:33:53.511017] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:14.833  [2024-11-20 14:33:53.673617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.833  [2024-11-20 14:33:53.673688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:14.833  [2024-11-20 14:33:53.673708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:14.833  [2024-11-20 14:33:53.673720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.833  [2024-11-20 14:33:53.677110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.833  [2024-11-20 14:33:53.677156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:14.833  [2024-11-20 14:33:53.677173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.360 ms
00:23:14.833  [2024-11-20 14:33:53.677183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.833  [2024-11-20 14:33:53.677321] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:14.833  [2024-11-20 14:33:53.678269] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:14.833  [2024-11-20 14:33:53.678313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.833  [2024-11-20 14:33:53.678327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:14.833  [2024-11-20 14:33:53.678339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.004 ms
00:23:14.833  [2024-11-20 14:33:53.678350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.833  [2024-11-20 14:33:53.679647] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:14.833  [2024-11-20 14:33:53.695907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.833  [2024-11-20 14:33:53.695961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:23:14.833  [2024-11-20 14:33:53.695981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.260 ms
00:23:14.833  [2024-11-20 14:33:53.695993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.833  [2024-11-20 14:33:53.696120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.833  [2024-11-20 14:33:53.696141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:23:14.833  [2024-11-20 14:33:53.696155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.028 ms
00:23:14.833  [2024-11-20 14:33:53.696166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.833  [2024-11-20 14:33:53.700536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.833  [2024-11-20 14:33:53.700597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:14.833  [2024-11-20 14:33:53.700615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.310 ms
00:23:14.833  [2024-11-20 14:33:53.700626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.833  [2024-11-20 14:33:53.700771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.833  [2024-11-20 14:33:53.700794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:14.833  [2024-11-20 14:33:53.700807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.069 ms
00:23:14.833  [2024-11-20 14:33:53.700818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.833  [2024-11-20 14:33:53.700857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.833  [2024-11-20 14:33:53.700877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:14.833  [2024-11-20 14:33:53.700889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:23:14.833  [2024-11-20 14:33:53.700899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.833  [2024-11-20 14:33:53.700931] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:14.833  [2024-11-20 14:33:53.705196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.833  [2024-11-20 14:33:53.705242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:14.833  [2024-11-20 14:33:53.705256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.274 ms
00:23:14.833  [2024-11-20 14:33:53.705267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.833  [2024-11-20 14:33:53.705335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.833  [2024-11-20 14:33:53.705352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:14.833  [2024-11-20 14:33:53.705364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:23:14.833  [2024-11-20 14:33:53.705375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.833  [2024-11-20 14:33:53.705407] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:14.833  [2024-11-20 14:33:53.705440] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:23:14.833  [2024-11-20 14:33:53.705484] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:14.833  [2024-11-20 14:33:53.705503] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:23:14.833  [2024-11-20 14:33:53.705631] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:14.833  [2024-11-20 14:33:53.705650] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:14.833  [2024-11-20 14:33:53.705664] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:14.833  [2024-11-20 14:33:53.705679] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:14.834  [2024-11-20 14:33:53.705698] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:14.834  [2024-11-20 14:33:53.705709] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:14.834  [2024-11-20 14:33:53.705720] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:14.834  [2024-11-20 14:33:53.705730] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:14.834  [2024-11-20 14:33:53.705740] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:14.834  [2024-11-20 14:33:53.705753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.834  [2024-11-20 14:33:53.705763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:14.834  [2024-11-20 14:33:53.705775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.349 ms
00:23:14.834  [2024-11-20 14:33:53.705785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.834  [2024-11-20 14:33:53.705890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.834  [2024-11-20 14:33:53.705910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:14.834  [2024-11-20 14:33:53.705922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.074 ms
00:23:14.834  [2024-11-20 14:33:53.705933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.834  [2024-11-20 14:33:53.706051] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:14.834  [2024-11-20 14:33:53.706080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:14.834  [2024-11-20 14:33:53.706093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:14.834  [2024-11-20 14:33:53.706104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:14.834  [2024-11-20 14:33:53.706127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:14.834  [2024-11-20 14:33:53.706147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:14.834  [2024-11-20 14:33:53.706157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:14.834  [2024-11-20 14:33:53.706177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:14.834  [2024-11-20 14:33:53.706187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:14.834  [2024-11-20 14:33:53.706197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:14.834  [2024-11-20 14:33:53.706220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:14.834  [2024-11-20 14:33:53.706231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:14.834  [2024-11-20 14:33:53.706240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:14.834  [2024-11-20 14:33:53.706260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:14.834  [2024-11-20 14:33:53.706270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:14.834  [2024-11-20 14:33:53.706289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:14.834  [2024-11-20 14:33:53.706309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:14.834  [2024-11-20 14:33:53.706319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:14.834  [2024-11-20 14:33:53.706338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:14.834  [2024-11-20 14:33:53.706348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:14.834  [2024-11-20 14:33:53.706367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:14.834  [2024-11-20 14:33:53.706377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:14.834  [2024-11-20 14:33:53.706396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:14.834  [2024-11-20 14:33:53.706406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:14.834  [2024-11-20 14:33:53.706431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:14.834  [2024-11-20 14:33:53.706441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:14.834  [2024-11-20 14:33:53.706451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:14.834  [2024-11-20 14:33:53.706462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:14.834  [2024-11-20 14:33:53.706472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:14.834  [2024-11-20 14:33:53.706482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:14.834  [2024-11-20 14:33:53.706502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:14.834  [2024-11-20 14:33:53.706511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706521] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:14.834  [2024-11-20 14:33:53.706532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:14.834  [2024-11-20 14:33:53.706542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:14.834  [2024-11-20 14:33:53.706558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:14.834  [2024-11-20 14:33:53.706592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:14.834  [2024-11-20 14:33:53.706607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:14.834  [2024-11-20 14:33:53.706618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:14.834  [2024-11-20 14:33:53.706628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:14.834  [2024-11-20 14:33:53.706637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:14.834  [2024-11-20 14:33:53.706647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:23:14.834  [2024-11-20 14:33:53.706659] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:14.834  [2024-11-20 14:33:53.706672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:14.834  [2024-11-20 14:33:53.706684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:14.834  [2024-11-20 14:33:53.706695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:14.834  [2024-11-20 14:33:53.706705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:14.834  [2024-11-20 14:33:53.706716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:14.834  [2024-11-20 14:33:53.706727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:14.834  [2024-11-20 14:33:53.706738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:14.834  [2024-11-20 14:33:53.706749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:14.834  [2024-11-20 14:33:53.706759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:14.834  [2024-11-20 14:33:53.706770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:14.834  [2024-11-20 14:33:53.706780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:14.834  [2024-11-20 14:33:53.706791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:14.834  [2024-11-20 14:33:53.706801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:14.834  [2024-11-20 14:33:53.706812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:14.834  [2024-11-20 14:33:53.706823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:14.835  [2024-11-20 14:33:53.706835] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:14.835  [2024-11-20 14:33:53.706849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:14.835  [2024-11-20 14:33:53.706860] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:14.835  [2024-11-20 14:33:53.706871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:14.835  [2024-11-20 14:33:53.706882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:14.835  [2024-11-20 14:33:53.706893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:14.835  [2024-11-20 14:33:53.706905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.835  [2024-11-20 14:33:53.706916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:14.835  [2024-11-20 14:33:53.706934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.925 ms
00:23:14.835  [2024-11-20 14:33:53.706945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.835  [2024-11-20 14:33:53.739711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.835  [2024-11-20 14:33:53.739774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:14.835  [2024-11-20 14:33:53.739793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.665 ms
00:23:14.835  [2024-11-20 14:33:53.739804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.835  [2024-11-20 14:33:53.740001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.835  [2024-11-20 14:33:53.740050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:14.835  [2024-11-20 14:33:53.740065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.066 ms
00:23:14.835  [2024-11-20 14:33:53.740075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.835  [2024-11-20 14:33:53.793994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.835  [2024-11-20 14:33:53.794058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:14.835  [2024-11-20 14:33:53.794079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 53.882 ms
00:23:14.835  [2024-11-20 14:33:53.794097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.835  [2024-11-20 14:33:53.794262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.835  [2024-11-20 14:33:53.794282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:14.835  [2024-11-20 14:33:53.794295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:14.835  [2024-11-20 14:33:53.794306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.835  [2024-11-20 14:33:53.794642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.835  [2024-11-20 14:33:53.794671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:14.835  [2024-11-20 14:33:53.794685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.304 ms
00:23:14.835  [2024-11-20 14:33:53.794704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.835  [2024-11-20 14:33:53.794862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.835  [2024-11-20 14:33:53.794881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:14.835  [2024-11-20 14:33:53.794893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.124 ms
00:23:14.835  [2024-11-20 14:33:53.794904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:14.835  [2024-11-20 14:33:53.811949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:14.835  [2024-11-20 14:33:53.812010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:14.835  [2024-11-20 14:33:53.812029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.014 ms
00:23:14.835  [2024-11-20 14:33:53.812040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.093  [2024-11-20 14:33:53.828423] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:23:15.093  [2024-11-20 14:33:53.828481] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:15.093  [2024-11-20 14:33:53.828501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.093  [2024-11-20 14:33:53.828514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:23:15.093  [2024-11-20 14:33:53.828527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.288 ms
00:23:15.093  [2024-11-20 14:33:53.828537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.093  [2024-11-20 14:33:53.858320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.093  [2024-11-20 14:33:53.858371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:23:15.093  [2024-11-20 14:33:53.858402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.669 ms
00:23:15.093  [2024-11-20 14:33:53.858414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.093  [2024-11-20 14:33:53.874234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.093  [2024-11-20 14:33:53.874279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:23:15.093  [2024-11-20 14:33:53.874296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.707 ms
00:23:15.093  [2024-11-20 14:33:53.874307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.093  [2024-11-20 14:33:53.889831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.093  [2024-11-20 14:33:53.889878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:23:15.093  [2024-11-20 14:33:53.889895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.426 ms
00:23:15.093  [2024-11-20 14:33:53.889906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.093  [2024-11-20 14:33:53.890734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.093  [2024-11-20 14:33:53.890771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:15.093  [2024-11-20 14:33:53.890786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.693 ms
00:23:15.093  [2024-11-20 14:33:53.890798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.093  [2024-11-20 14:33:53.963440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.093  [2024-11-20 14:33:53.963510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:23:15.094  [2024-11-20 14:33:53.963531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 72.606 ms
00:23:15.094  [2024-11-20 14:33:53.963543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.094  [2024-11-20 14:33:53.976203] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:15.094  [2024-11-20 14:33:53.990045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.094  [2024-11-20 14:33:53.990116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:15.094  [2024-11-20 14:33:53.990135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.325 ms
00:23:15.094  [2024-11-20 14:33:53.990147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.094  [2024-11-20 14:33:53.990294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.094  [2024-11-20 14:33:53.990319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:23:15.094  [2024-11-20 14:33:53.990332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:23:15.094  [2024-11-20 14:33:53.990343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.094  [2024-11-20 14:33:53.990412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.094  [2024-11-20 14:33:53.990428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:15.094  [2024-11-20 14:33:53.990440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.039 ms
00:23:15.094  [2024-11-20 14:33:53.990450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.094  [2024-11-20 14:33:53.990497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.094  [2024-11-20 14:33:53.990516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:15.094  [2024-11-20 14:33:53.990531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.021 ms
00:23:15.094  [2024-11-20 14:33:53.990542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.094  [2024-11-20 14:33:53.990608] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:15.094  [2024-11-20 14:33:53.990632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.094  [2024-11-20 14:33:53.990644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:23:15.094  [2024-11-20 14:33:53.990656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.026 ms
00:23:15.094  [2024-11-20 14:33:53.990667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.094  [2024-11-20 14:33:54.021827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.094  [2024-11-20 14:33:54.021888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:15.094  [2024-11-20 14:33:54.021906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.131 ms
00:23:15.094  [2024-11-20 14:33:54.021918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.094  [2024-11-20 14:33:54.022063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:15.094  [2024-11-20 14:33:54.022084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:15.094  [2024-11-20 14:33:54.022098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.046 ms
00:23:15.094  [2024-11-20 14:33:54.022108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:15.094  [2024-11-20 14:33:54.023184] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:15.094  [2024-11-20 14:33:54.027426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 349.268 ms, result 0
00:23:15.094  [2024-11-20 14:33:54.028297] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:15.094  [2024-11-20 14:33:54.044972] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:16.485  
[2024-11-20T14:33:56.402Z] Copying: 25/256 [MB] (25 MBps)
[2024-11-20T14:33:57.338Z] Copying: 51/256 [MB] (26 MBps)
[2024-11-20T14:33:58.272Z] Copying: 77/256 [MB] (26 MBps)
[2024-11-20T14:33:59.206Z] Copying: 105/256 [MB] (27 MBps)
[2024-11-20T14:34:00.139Z] Copying: 130/256 [MB] (25 MBps)
[2024-11-20T14:34:01.128Z] Copying: 157/256 [MB] (27 MBps)
[2024-11-20T14:34:02.085Z] Copying: 185/256 [MB] (27 MBps)
[2024-11-20T14:34:03.459Z] Copying: 213/256 [MB] (27 MBps)
[2024-11-20T14:34:04.028Z] Copying: 238/256 [MB] (25 MBps)
[2024-11-20T14:34:04.028Z] Copying: 256/256 [MB] (average 26 MBps)
00:23:25.046  [2024-11-20 14:34:03.755741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:25.046  [2024-11-20 14:34:03.768019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.046  [2024-11-20 14:34:03.768068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:25.046  [2024-11-20 14:34:03.768088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:25.046  [2024-11-20 14:34:03.768100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.046  [2024-11-20 14:34:03.768141] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:25.046  [2024-11-20 14:34:03.771468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.046  [2024-11-20 14:34:03.771504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:25.046  [2024-11-20 14:34:03.771519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.304 ms
00:23:25.046  [2024-11-20 14:34:03.771529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.046  [2024-11-20 14:34:03.773173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.046  [2024-11-20 14:34:03.773218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:25.046  [2024-11-20 14:34:03.773234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.614 ms
00:23:25.046  [2024-11-20 14:34:03.773245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.046  [2024-11-20 14:34:03.780524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.046  [2024-11-20 14:34:03.780566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:25.046  [2024-11-20 14:34:03.780602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.253 ms
00:23:25.046  [2024-11-20 14:34:03.780614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.046  [2024-11-20 14:34:03.788154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.046  [2024-11-20 14:34:03.788191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:25.046  [2024-11-20 14:34:03.788206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.477 ms
00:23:25.046  [2024-11-20 14:34:03.788217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.046  [2024-11-20 14:34:03.819627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.047  [2024-11-20 14:34:03.819694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:25.047  [2024-11-20 14:34:03.819713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.356 ms
00:23:25.047  [2024-11-20 14:34:03.819724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.047  [2024-11-20 14:34:03.837798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.047  [2024-11-20 14:34:03.837851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:25.047  [2024-11-20 14:34:03.837887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.962 ms
00:23:25.047  [2024-11-20 14:34:03.837916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.047  [2024-11-20 14:34:03.838125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.047  [2024-11-20 14:34:03.838149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:25.047  [2024-11-20 14:34:03.838163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.097 ms
00:23:25.047  [2024-11-20 14:34:03.838174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.047  [2024-11-20 14:34:03.870006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.047  [2024-11-20 14:34:03.870066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:25.047  [2024-11-20 14:34:03.870085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.805 ms
00:23:25.047  [2024-11-20 14:34:03.870096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.047  [2024-11-20 14:34:03.903654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.047  [2024-11-20 14:34:03.903731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:25.047  [2024-11-20 14:34:03.903751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.477 ms
00:23:25.047  [2024-11-20 14:34:03.903763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.047  [2024-11-20 14:34:03.935046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.047  [2024-11-20 14:34:03.935107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:25.047  [2024-11-20 14:34:03.935127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.180 ms
00:23:25.047  [2024-11-20 14:34:03.935140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.047  [2024-11-20 14:34:03.966315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.047  [2024-11-20 14:34:03.966378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:25.047  [2024-11-20 14:34:03.966398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.043 ms
00:23:25.047  [2024-11-20 14:34:03.966409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.047  [2024-11-20 14:34:03.966486] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:25.047  [2024-11-20 14:34:03.966535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.966998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.047  [2024-11-20 14:34:03.967292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:23:25.048  [2024-11-20 14:34:03.967749] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:25.048  [2024-11-20 14:34:03.967760] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         4beb1ddc-3b05-4a56-8819-65e82b329bd5
00:23:25.048  [2024-11-20 14:34:03.967772] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:25.048  [2024-11-20 14:34:03.967782] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:25.048  [2024-11-20 14:34:03.967793] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:25.048  [2024-11-20 14:34:03.967804] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:25.048  [2024-11-20 14:34:03.967814] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:25.048  [2024-11-20 14:34:03.967825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:25.048  [2024-11-20 14:34:03.967835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:25.048  [2024-11-20 14:34:03.967845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:25.048  [2024-11-20 14:34:03.967859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:23:25.048  [2024-11-20 14:34:03.967871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.048  [2024-11-20 14:34:03.967882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:25.048  [2024-11-20 14:34:03.967900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.387 ms
00:23:25.048  [2024-11-20 14:34:03.967911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.048  [2024-11-20 14:34:03.984665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.048  [2024-11-20 14:34:03.984715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:25.048  [2024-11-20 14:34:03.984733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.723 ms
00:23:25.048  [2024-11-20 14:34:03.984745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.048  [2024-11-20 14:34:03.985212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:25.048  [2024-11-20 14:34:03.985246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:25.048  [2024-11-20 14:34:03.985260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.408 ms
00:23:25.048  [2024-11-20 14:34:03.985271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.307  [2024-11-20 14:34:04.031714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.307  [2024-11-20 14:34:04.031783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:25.307  [2024-11-20 14:34:04.031802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.307  [2024-11-20 14:34:04.031814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.307  [2024-11-20 14:34:04.031981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.307  [2024-11-20 14:34:04.032021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:25.307  [2024-11-20 14:34:04.032035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.307  [2024-11-20 14:34:04.032046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.307  [2024-11-20 14:34:04.032126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.307  [2024-11-20 14:34:04.032145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:25.307  [2024-11-20 14:34:04.032158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.307  [2024-11-20 14:34:04.032169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.307  [2024-11-20 14:34:04.032194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.307  [2024-11-20 14:34:04.032207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:25.307  [2024-11-20 14:34:04.032240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.307  [2024-11-20 14:34:04.032251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.307  [2024-11-20 14:34:04.138892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.307  [2024-11-20 14:34:04.138988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:25.307  [2024-11-20 14:34:04.139009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.307  [2024-11-20 14:34:04.139022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.307  [2024-11-20 14:34:04.226062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.307  [2024-11-20 14:34:04.226140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:25.307  [2024-11-20 14:34:04.226160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.307  [2024-11-20 14:34:04.226171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.307  [2024-11-20 14:34:04.226267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.307  [2024-11-20 14:34:04.226285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:25.307  [2024-11-20 14:34:04.226297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.307  [2024-11-20 14:34:04.226309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.307  [2024-11-20 14:34:04.226344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.307  [2024-11-20 14:34:04.226357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:25.307  [2024-11-20 14:34:04.226369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.307  [2024-11-20 14:34:04.226386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.307  [2024-11-20 14:34:04.226512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.307  [2024-11-20 14:34:04.226542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:25.307  [2024-11-20 14:34:04.226556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.308  [2024-11-20 14:34:04.226566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.308  [2024-11-20 14:34:04.226646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.308  [2024-11-20 14:34:04.226673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:25.308  [2024-11-20 14:34:04.226686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.308  [2024-11-20 14:34:04.226697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.308  [2024-11-20 14:34:04.226752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.308  [2024-11-20 14:34:04.226768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:25.308  [2024-11-20 14:34:04.226780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.308  [2024-11-20 14:34:04.226790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.308  [2024-11-20 14:34:04.226846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:25.308  [2024-11-20 14:34:04.226862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:25.308  [2024-11-20 14:34:04.226874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:25.308  [2024-11-20 14:34:04.226890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:25.308  [2024-11-20 14:34:04.227069] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 459.042 ms, result 0
00:23:26.682  
00:23:26.682  
00:23:26.682   14:34:05 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78832
00:23:26.682   14:34:05 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78832
00:23:26.682   14:34:05 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:23:26.682   14:34:05 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78832 ']'
00:23:26.682   14:34:05 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:26.682   14:34:05 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:26.682  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:26.682   14:34:05 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:26.682   14:34:05 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:26.682   14:34:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:23:26.682  [2024-11-20 14:34:05.453792] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:23:26.682  [2024-11-20 14:34:05.454606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78832 ]
00:23:26.682  [2024-11-20 14:34:05.626726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:26.941  [2024-11-20 14:34:05.737186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:27.878   14:34:06 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:27.878   14:34:06 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:23:27.878   14:34:06 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:23:27.878  [2024-11-20 14:34:06.856973] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:27.878  [2024-11-20 14:34:06.857052] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:28.157  [2024-11-20 14:34:07.046467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.046536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:28.157  [2024-11-20 14:34:07.046564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:23:28.157  [2024-11-20 14:34:07.046597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.050690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.050736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:28.157  [2024-11-20 14:34:07.050757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.063 ms
00:23:28.157  [2024-11-20 14:34:07.050770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.050912] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:28.157  [2024-11-20 14:34:07.051873] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:28.157  [2024-11-20 14:34:07.051919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.051934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:28.157  [2024-11-20 14:34:07.051949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.022 ms
00:23:28.157  [2024-11-20 14:34:07.051961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.053213] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:28.157  [2024-11-20 14:34:07.069799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.069861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:23:28.157  [2024-11-20 14:34:07.069882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.593 ms
00:23:28.157  [2024-11-20 14:34:07.069901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.070089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.070122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:23:28.157  [2024-11-20 14:34:07.070138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.034 ms
00:23:28.157  [2024-11-20 14:34:07.070157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.074644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.074716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:28.157  [2024-11-20 14:34:07.074737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.409 ms
00:23:28.157  [2024-11-20 14:34:07.074756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.074977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.075019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:28.157  [2024-11-20 14:34:07.075036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.138 ms
00:23:28.157  [2024-11-20 14:34:07.075053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.075104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.075127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:28.157  [2024-11-20 14:34:07.075142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:23:28.157  [2024-11-20 14:34:07.075160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.075197] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:28.157  [2024-11-20 14:34:07.079519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.079558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:28.157  [2024-11-20 14:34:07.079594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.326 ms
00:23:28.157  [2024-11-20 14:34:07.079609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.079690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.079710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:28.157  [2024-11-20 14:34:07.079729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:23:28.157  [2024-11-20 14:34:07.079748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.079785] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:28.157  [2024-11-20 14:34:07.079818] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:23:28.157  [2024-11-20 14:34:07.079883] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:28.157  [2024-11-20 14:34:07.079922] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:23:28.157  [2024-11-20 14:34:07.080045] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:28.157  [2024-11-20 14:34:07.080064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:28.157  [2024-11-20 14:34:07.080097] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:28.157  [2024-11-20 14:34:07.080115] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:28.157  [2024-11-20 14:34:07.080135] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:28.157  [2024-11-20 14:34:07.080149] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:28.157  [2024-11-20 14:34:07.080163] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:28.157  [2024-11-20 14:34:07.080175] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:28.157  [2024-11-20 14:34:07.080191] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:28.157  [2024-11-20 14:34:07.080204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.080218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:28.157  [2024-11-20 14:34:07.080231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.429 ms
00:23:28.157  [2024-11-20 14:34:07.080245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.080374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.157  [2024-11-20 14:34:07.080394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:28.157  [2024-11-20 14:34:07.080408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:23:28.157  [2024-11-20 14:34:07.080421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.157  [2024-11-20 14:34:07.080536] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:28.157  [2024-11-20 14:34:07.080554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:28.157  [2024-11-20 14:34:07.080583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:28.157  [2024-11-20 14:34:07.080603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:28.157  [2024-11-20 14:34:07.080616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:28.157  [2024-11-20 14:34:07.080629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:28.157  [2024-11-20 14:34:07.080640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:28.157  [2024-11-20 14:34:07.080658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:28.157  [2024-11-20 14:34:07.080669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:28.157  [2024-11-20 14:34:07.080683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:28.157  [2024-11-20 14:34:07.080694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:28.157  [2024-11-20 14:34:07.080707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:28.157  [2024-11-20 14:34:07.080718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:28.157  [2024-11-20 14:34:07.080732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:28.157  [2024-11-20 14:34:07.080744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:28.157  [2024-11-20 14:34:07.080757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:28.157  [2024-11-20 14:34:07.080768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:28.157  [2024-11-20 14:34:07.080781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:28.157  [2024-11-20 14:34:07.080792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:28.157  [2024-11-20 14:34:07.080805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:28.157  [2024-11-20 14:34:07.080827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:28.157  [2024-11-20 14:34:07.080842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:28.157  [2024-11-20 14:34:07.080853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:28.157  [2024-11-20 14:34:07.080869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:28.157  [2024-11-20 14:34:07.080880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:28.157  [2024-11-20 14:34:07.080895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:28.157  [2024-11-20 14:34:07.080906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:28.157  [2024-11-20 14:34:07.080919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:28.157  [2024-11-20 14:34:07.080931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:28.157  [2024-11-20 14:34:07.080944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:28.157  [2024-11-20 14:34:07.080955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:28.157  [2024-11-20 14:34:07.080968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:28.158  [2024-11-20 14:34:07.080979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:28.158  [2024-11-20 14:34:07.080994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:28.158  [2024-11-20 14:34:07.081005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:28.158  [2024-11-20 14:34:07.081019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:28.158  [2024-11-20 14:34:07.081029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:28.158  [2024-11-20 14:34:07.081042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:28.158  [2024-11-20 14:34:07.081054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:28.158  [2024-11-20 14:34:07.081068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:28.158  [2024-11-20 14:34:07.081079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:28.158  [2024-11-20 14:34:07.081099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:28.158  [2024-11-20 14:34:07.081112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:28.158  [2024-11-20 14:34:07.081129] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:28.158  [2024-11-20 14:34:07.081148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:28.158  [2024-11-20 14:34:07.081165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:28.158  [2024-11-20 14:34:07.081179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:28.158  [2024-11-20 14:34:07.081196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:28.158  [2024-11-20 14:34:07.081209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:28.158  [2024-11-20 14:34:07.081227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:28.158  [2024-11-20 14:34:07.081240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:28.158  [2024-11-20 14:34:07.081256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:28.158  [2024-11-20 14:34:07.081269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:23:28.158  [2024-11-20 14:34:07.081288] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:28.158  [2024-11-20 14:34:07.081304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:28.158  [2024-11-20 14:34:07.081327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:28.158  [2024-11-20 14:34:07.081341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:28.158  [2024-11-20 14:34:07.081362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:28.158  [2024-11-20 14:34:07.081376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:28.158  [2024-11-20 14:34:07.081394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:28.158  [2024-11-20 14:34:07.081407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:28.158  [2024-11-20 14:34:07.081424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:28.158  [2024-11-20 14:34:07.081438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:28.158  [2024-11-20 14:34:07.081455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:28.158  [2024-11-20 14:34:07.081469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:28.158  [2024-11-20 14:34:07.081487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:28.158  [2024-11-20 14:34:07.081500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:28.158  [2024-11-20 14:34:07.081518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:28.158  [2024-11-20 14:34:07.081532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:28.158  [2024-11-20 14:34:07.081550] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:28.158  [2024-11-20 14:34:07.081565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:28.158  [2024-11-20 14:34:07.081605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:28.158  [2024-11-20 14:34:07.081620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:28.158  [2024-11-20 14:34:07.081638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:28.158  [2024-11-20 14:34:07.081652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:28.158  [2024-11-20 14:34:07.081672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.158  [2024-11-20 14:34:07.081686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:28.158  [2024-11-20 14:34:07.081704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.200 ms
00:23:28.158  [2024-11-20 14:34:07.081717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.158  [2024-11-20 14:34:07.116425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.158  [2024-11-20 14:34:07.116494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:28.158  [2024-11-20 14:34:07.116525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.612 ms
00:23:28.158  [2024-11-20 14:34:07.116545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.158  [2024-11-20 14:34:07.116760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.158  [2024-11-20 14:34:07.116782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:28.158  [2024-11-20 14:34:07.116803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.066 ms
00:23:28.158  [2024-11-20 14:34:07.116827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.159483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.159555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:28.426  [2024-11-20 14:34:07.159592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 42.618 ms
00:23:28.426  [2024-11-20 14:34:07.159609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.159764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.159784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:28.426  [2024-11-20 14:34:07.159800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:28.426  [2024-11-20 14:34:07.159812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.160155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.160186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:28.426  [2024-11-20 14:34:07.160206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.311 ms
00:23:28.426  [2024-11-20 14:34:07.160218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.160379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.160419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:28.426  [2024-11-20 14:34:07.160440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.129 ms
00:23:28.426  [2024-11-20 14:34:07.160451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.179709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.179766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:28.426  [2024-11-20 14:34:07.179788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.221 ms
00:23:28.426  [2024-11-20 14:34:07.179802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.204797] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:23:28.426  [2024-11-20 14:34:07.204874] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:28.426  [2024-11-20 14:34:07.204902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.204916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:23:28.426  [2024-11-20 14:34:07.204933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.933 ms
00:23:28.426  [2024-11-20 14:34:07.204945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.234980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.235047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:23:28.426  [2024-11-20 14:34:07.235070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.915 ms
00:23:28.426  [2024-11-20 14:34:07.235084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.251025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.251079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:23:28.426  [2024-11-20 14:34:07.251104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.819 ms
00:23:28.426  [2024-11-20 14:34:07.251117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.266920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.266964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:23:28.426  [2024-11-20 14:34:07.266985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.692 ms
00:23:28.426  [2024-11-20 14:34:07.266997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.267943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.267984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:28.426  [2024-11-20 14:34:07.268007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.773 ms
00:23:28.426  [2024-11-20 14:34:07.268022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.342727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.342801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:23:28.426  [2024-11-20 14:34:07.342831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 74.658 ms
00:23:28.426  [2024-11-20 14:34:07.342846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.355762] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:28.426  [2024-11-20 14:34:07.369840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.369937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:28.426  [2024-11-20 14:34:07.369967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.824 ms
00:23:28.426  [2024-11-20 14:34:07.369987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.370159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.370188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:23:28.426  [2024-11-20 14:34:07.370203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:23:28.426  [2024-11-20 14:34:07.370222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.370297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.370322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:28.426  [2024-11-20 14:34:07.370337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.044 ms
00:23:28.426  [2024-11-20 14:34:07.370362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.370397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.370418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:28.426  [2024-11-20 14:34:07.370433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:23:28.426  [2024-11-20 14:34:07.370454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.426  [2024-11-20 14:34:07.370505] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:28.426  [2024-11-20 14:34:07.370534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.426  [2024-11-20 14:34:07.370547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:23:28.426  [2024-11-20 14:34:07.370599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:23:28.426  [2024-11-20 14:34:07.370618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.683  [2024-11-20 14:34:07.412344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.683  [2024-11-20 14:34:07.412418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:28.683  [2024-11-20 14:34:07.412449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 41.668 ms
00:23:28.683  [2024-11-20 14:34:07.412464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.683  [2024-11-20 14:34:07.412644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.683  [2024-11-20 14:34:07.412668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:28.683  [2024-11-20 14:34:07.412689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:23:28.683  [2024-11-20 14:34:07.412708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.683  [2024-11-20 14:34:07.413795] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:28.683  [2024-11-20 14:34:07.417926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 366.916 ms, result 0
00:23:28.683  [2024-11-20 14:34:07.418966] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:28.683  Some configs were skipped because the RPC state that can call them passed over.
00:23:28.683   14:34:07 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:23:28.941  [2024-11-20 14:34:07.729074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:28.941  [2024-11-20 14:34:07.729160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:23:28.941  [2024-11-20 14:34:07.729183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.512 ms
00:23:28.941  [2024-11-20 14:34:07.729203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:28.941  [2024-11-20 14:34:07.729257] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.698 ms, result 0
00:23:28.941  true
00:23:28.941   14:34:07 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:23:29.199  [2024-11-20 14:34:07.981006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:29.199  [2024-11-20 14:34:07.981088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:23:29.199  [2024-11-20 14:34:07.981135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.104 ms
00:23:29.199  [2024-11-20 14:34:07.981160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:29.199  [2024-11-20 14:34:07.981292] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.376 ms, result 0
00:23:29.199  true
00:23:29.199   14:34:07 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78832
00:23:29.199   14:34:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78832 ']'
00:23:29.199   14:34:07 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78832
00:23:29.199    14:34:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:23:29.199   14:34:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:29.199    14:34:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78832
00:23:29.199  killing process with pid 78832
00:23:29.199   14:34:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:29.199   14:34:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:29.199   14:34:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78832'
00:23:29.199   14:34:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78832
00:23:29.199   14:34:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78832
00:23:30.133  [2024-11-20 14:34:08.983276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.133  [2024-11-20 14:34:08.983365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:30.133  [2024-11-20 14:34:08.983388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:23:30.133  [2024-11-20 14:34:08.983403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.133  [2024-11-20 14:34:08.983457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:30.133  [2024-11-20 14:34:08.986880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.133  [2024-11-20 14:34:08.986933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:30.133  [2024-11-20 14:34:08.986958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.391 ms
00:23:30.133  [2024-11-20 14:34:08.986971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.133  [2024-11-20 14:34:08.987341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.133  [2024-11-20 14:34:08.987376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:30.133  [2024-11-20 14:34:08.987395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.296 ms
00:23:30.133  [2024-11-20 14:34:08.987422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.133  [2024-11-20 14:34:08.991763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.133  [2024-11-20 14:34:08.991814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:30.134  [2024-11-20 14:34:08.991852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.305 ms
00:23:30.134  [2024-11-20 14:34:08.991865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.134  [2024-11-20 14:34:08.999755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.134  [2024-11-20 14:34:08.999809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:30.134  [2024-11-20 14:34:08.999831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.818 ms
00:23:30.134  [2024-11-20 14:34:08.999845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.134  [2024-11-20 14:34:09.012690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.134  [2024-11-20 14:34:09.012751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:30.134  [2024-11-20 14:34:09.012778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.719 ms
00:23:30.134  [2024-11-20 14:34:09.012805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.134  [2024-11-20 14:34:09.021203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.134  [2024-11-20 14:34:09.021255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:30.134  [2024-11-20 14:34:09.021276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.313 ms
00:23:30.134  [2024-11-20 14:34:09.021289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.134  [2024-11-20 14:34:09.021458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.134  [2024-11-20 14:34:09.021478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:30.134  [2024-11-20 14:34:09.021495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.100 ms
00:23:30.134  [2024-11-20 14:34:09.021507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.134  [2024-11-20 14:34:09.034793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.134  [2024-11-20 14:34:09.034839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:30.134  [2024-11-20 14:34:09.034860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 13.254 ms
00:23:30.134  [2024-11-20 14:34:09.034872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.134  [2024-11-20 14:34:09.047550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.134  [2024-11-20 14:34:09.047604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:30.134  [2024-11-20 14:34:09.047636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.596 ms
00:23:30.134  [2024-11-20 14:34:09.047651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.134  [2024-11-20 14:34:09.059743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.134  [2024-11-20 14:34:09.059783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:30.134  [2024-11-20 14:34:09.059811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.014 ms
00:23:30.134  [2024-11-20 14:34:09.059825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.134  [2024-11-20 14:34:09.072133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.134  [2024-11-20 14:34:09.072188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:30.134  [2024-11-20 14:34:09.072212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.194 ms
00:23:30.134  [2024-11-20 14:34:09.072226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.134  [2024-11-20 14:34:09.072300] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:30.134  [2024-11-20 14:34:09.072326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.134  [2024-11-20 14:34:09.072923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.072936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.072950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.072962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.072978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.072991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:30.135  [2024-11-20 14:34:09.073757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
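(Note: all 100 bands above report 0 / 261120 valid blocks in state 'free', which is consistent with the 'total valid LBAs: 0' line in the statistics dump that follows. A quick sanity check against a saved copy of this log — build.log is a hypothetical file name:

    # count how many bands the shutdown dump reported as free
    grep -c 'state: free' build.log    # expect 100 for this run
)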
00:23:30.135  [2024-11-20 14:34:09.073780] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:30.135  [2024-11-20 14:34:09.073811] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         4beb1ddc-3b05-4a56-8819-65e82b329bd5
00:23:30.135  [2024-11-20 14:34:09.073842] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:30.135  [2024-11-20 14:34:09.073867] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:30.135  [2024-11-20 14:34:09.073880] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:30.135  [2024-11-20 14:34:09.073897] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:30.135  [2024-11-20 14:34:09.073910] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:30.135  [2024-11-20 14:34:09.073927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:30.135  [2024-11-20 14:34:09.073941] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:30.135  [2024-11-20 14:34:09.073957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:30.135  [2024-11-20 14:34:09.073968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:23:30.135  [2024-11-20 14:34:09.073986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.135  [2024-11-20 14:34:09.073999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:30.135  [2024-11-20 14:34:09.074018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.691 ms
00:23:30.135  [2024-11-20 14:34:09.074032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
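(Note: the statistics above report WAF — write amplification factor — as 'inf': the device performed 960 internal writes while the host issued 0 user writes, so the usual ratio of total writes to user writes has a zero denominator. A minimal sketch of that computation, guarding the division the same way:

    total_writes=960; user_writes=0
    if [ "$user_writes" -eq 0 ]; then
        echo "WAF: inf"                                              # no user I/O yet
    else
        echo "WAF: $(echo "scale=2; $total_writes / $user_writes" | bc)"
    fi
)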
00:23:30.135  [2024-11-20 14:34:09.090785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.135  [2024-11-20 14:34:09.090836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:30.135  [2024-11-20 14:34:09.090867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.687 ms
00:23:30.135  [2024-11-20 14:34:09.090882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.135  [2024-11-20 14:34:09.091400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:30.135  [2024-11-20 14:34:09.091442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:30.136  [2024-11-20 14:34:09.091466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.406 ms
00:23:30.136  [2024-11-20 14:34:09.091485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
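(Note: the 'Rollback' entries below walk the earlier startup actions in reverse — reloc, bands metadata, and so on down to 'Open base bdev' — as the device tears down. Listing their names from a saved log (build.log is a hypothetical file name):

    # each Rollback line is followed by its 'name:' line in the trace
    grep -A1 'Rollback' build.log | grep 'name:' | sed 's/.*name: *//'
)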
00:23:30.393  [2024-11-20 14:34:09.150731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.393  [2024-11-20 14:34:09.150805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:30.393  [2024-11-20 14:34:09.150833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.393  [2024-11-20 14:34:09.150847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.393  [2024-11-20 14:34:09.151002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.393  [2024-11-20 14:34:09.151022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:30.393  [2024-11-20 14:34:09.151041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.151061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.151144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.394  [2024-11-20 14:34:09.151164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:30.394  [2024-11-20 14:34:09.151189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.151202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.151236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.394  [2024-11-20 14:34:09.151251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:30.394  [2024-11-20 14:34:09.151266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.151278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.255896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.394  [2024-11-20 14:34:09.255964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:30.394  [2024-11-20 14:34:09.255988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.256001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.342069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.394  [2024-11-20 14:34:09.342135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:30.394  [2024-11-20 14:34:09.342159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.342175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.342295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.394  [2024-11-20 14:34:09.342314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:30.394  [2024-11-20 14:34:09.342332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.342344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.342383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.394  [2024-11-20 14:34:09.342396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:30.394  [2024-11-20 14:34:09.342411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.342422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.342557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.394  [2024-11-20 14:34:09.342602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:30.394  [2024-11-20 14:34:09.342622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.342634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.342695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.394  [2024-11-20 14:34:09.342713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:30.394  [2024-11-20 14:34:09.342728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.342740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.342826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.394  [2024-11-20 14:34:09.342844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:30.394  [2024-11-20 14:34:09.342867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.342880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.342945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:30.394  [2024-11-20 14:34:09.342964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:30.394  [2024-11-20 14:34:09.342983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:30.394  [2024-11-20 14:34:09.342996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:30.394  [2024-11-20 14:34:09.343174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 359.869 ms, result 0
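(Note: each management step above is traced as an Action or Rollback with a name, duration, and status, and the summary line reports the end-to-end time — 359.869 ms for 'FTL shutdown' here. Summing the per-step durations from a saved log is a handy cross-check; the sum normally lands a bit below the total, since time spent between steps is not attributed to any step. build.log is a hypothetical file name:

    # duration lines look like "... duration: 0.296 ms"; filter to one
    # management process first if the log holds several
    grep 'duration:' build.log | awk '{s += $(NF-1)} END {printf "steps: %.3f ms\n", s}'
)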
00:23:31.328   14:34:10 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:23:31.328   14:34:10 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
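(Note: the --count above is expressed in bdev blocks. Assuming the 4 KiB block size the FTL bdev exposes — an assumption, the log does not print it — 65536 blocks works out to exactly the 256 MB that the copy progress further below reports:

    echo "$(( 65536 * 4096 / 1024 / 1024 )) MiB"    # -> 256 MiB
)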
00:23:31.586  [2024-11-20 14:34:10.360259] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:23:31.586  [2024-11-20 14:34:10.360438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78896 ]
00:23:31.586  [2024-11-20 14:34:10.533666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:31.844  [2024-11-20 14:34:10.638221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:32.102  [2024-11-20 14:34:10.964976] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:32.102  [2024-11-20 14:34:10.965059] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:32.363  [2024-11-20 14:34:11.126898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.126971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:32.363  [2024-11-20 14:34:11.126992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:32.363  [2024-11-20 14:34:11.127005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.130312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.130361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:32.363  [2024-11-20 14:34:11.130378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.278 ms
00:23:32.363  [2024-11-20 14:34:11.130390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.130519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:32.363  [2024-11-20 14:34:11.131487] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:32.363  [2024-11-20 14:34:11.131535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.131549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:32.363  [2024-11-20 14:34:11.131562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.028 ms
00:23:32.363  [2024-11-20 14:34:11.131589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.132976] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:32.363  [2024-11-20 14:34:11.149279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.149349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:23:32.363  [2024-11-20 14:34:11.149369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.304 ms
00:23:32.363  [2024-11-20 14:34:11.149381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.149539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.149561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:23:32.363  [2024-11-20 14:34:11.149629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:23:32.363  [2024-11-20 14:34:11.149653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.154021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.154078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:32.363  [2024-11-20 14:34:11.154096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.292 ms
00:23:32.363  [2024-11-20 14:34:11.154108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.154256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.154278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:32.363  [2024-11-20 14:34:11.154291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.071 ms
00:23:32.363  [2024-11-20 14:34:11.154301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.154346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.154377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:32.363  [2024-11-20 14:34:11.154398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:23:32.363  [2024-11-20 14:34:11.154417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.154458] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:32.363  [2024-11-20 14:34:11.158807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.158846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:32.363  [2024-11-20 14:34:11.158862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.359 ms
00:23:32.363  [2024-11-20 14:34:11.158873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.158949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.158967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:32.363  [2024-11-20 14:34:11.158979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:23:32.363  [2024-11-20 14:34:11.158991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.159023] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:32.363  [2024-11-20 14:34:11.159064] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:23:32.363  [2024-11-20 14:34:11.159124] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:32.363  [2024-11-20 14:34:11.159152] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:23:32.363  [2024-11-20 14:34:11.159276] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:32.363  [2024-11-20 14:34:11.159304] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:32.363  [2024-11-20 14:34:11.159330] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:32.363  [2024-11-20 14:34:11.159347] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:32.363  [2024-11-20 14:34:11.159366] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:32.363  [2024-11-20 14:34:11.159379] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:32.363  [2024-11-20 14:34:11.159389] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:32.363  [2024-11-20 14:34:11.159418] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:32.363  [2024-11-20 14:34:11.159439] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:32.363  [2024-11-20 14:34:11.159453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.159464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:32.363  [2024-11-20 14:34:11.159483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.433 ms
00:23:32.363  [2024-11-20 14:34:11.159503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
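(Note: the L2P parameters above pin down the mapping-table footprint: 23592960 entries at 4 bytes per address is exactly the 90.00 MiB that the 'Region l2p' entry in the layout dump below occupies:

    echo "$(( 23592960 * 4 )) bytes = $(( 23592960 * 4 / 1024 / 1024 )) MiB"    # -> 94371840 bytes = 90 MiB
)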
00:23:32.363  [2024-11-20 14:34:11.159673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.363  [2024-11-20 14:34:11.159701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:32.363  [2024-11-20 14:34:11.159714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.084 ms
00:23:32.363  [2024-11-20 14:34:11.159724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.363  [2024-11-20 14:34:11.159870] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:32.363  [2024-11-20 14:34:11.159894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:32.364  [2024-11-20 14:34:11.159908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:32.364  [2024-11-20 14:34:11.159926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:32.364  [2024-11-20 14:34:11.159946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:32.364  [2024-11-20 14:34:11.159961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:32.364  [2024-11-20 14:34:11.159972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:32.364  [2024-11-20 14:34:11.159982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:32.364  [2024-11-20 14:34:11.159993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:32.364  [2024-11-20 14:34:11.160013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:32.364  [2024-11-20 14:34:11.160028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:32.364  [2024-11-20 14:34:11.160040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:32.364  [2024-11-20 14:34:11.160079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:32.364  [2024-11-20 14:34:11.160099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:32.364  [2024-11-20 14:34:11.160110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:32.364  [2024-11-20 14:34:11.160131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:32.364  [2024-11-20 14:34:11.160146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:32.364  [2024-11-20 14:34:11.160174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:32.364  [2024-11-20 14:34:11.160199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:32.364  [2024-11-20 14:34:11.160216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:32.364  [2024-11-20 14:34:11.160241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:32.364  [2024-11-20 14:34:11.160252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:32.364  [2024-11-20 14:34:11.160277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:32.364  [2024-11-20 14:34:11.160289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:32.364  [2024-11-20 14:34:11.160326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:32.364  [2024-11-20 14:34:11.160344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:32.364  [2024-11-20 14:34:11.160367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:32.364  [2024-11-20 14:34:11.160377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:32.364  [2024-11-20 14:34:11.160388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:32.364  [2024-11-20 14:34:11.160403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:32.364  [2024-11-20 14:34:11.160413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:32.364  [2024-11-20 14:34:11.160423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:32.364  [2024-11-20 14:34:11.160452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:32.364  [2024-11-20 14:34:11.160473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160488] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:32.364  [2024-11-20 14:34:11.160499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:32.364  [2024-11-20 14:34:11.160510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:32.364  [2024-11-20 14:34:11.160526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:32.364  [2024-11-20 14:34:11.160538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:32.364  [2024-11-20 14:34:11.160548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:32.364  [2024-11-20 14:34:11.160563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:32.364  [2024-11-20 14:34:11.160601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:32.364  [2024-11-20 14:34:11.160622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:32.364  [2024-11-20 14:34:11.160638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:23:32.364  [2024-11-20 14:34:11.160651] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:32.364  [2024-11-20 14:34:11.160665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:32.364  [2024-11-20 14:34:11.160677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:32.364  [2024-11-20 14:34:11.160694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:32.364  [2024-11-20 14:34:11.160706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:32.364  [2024-11-20 14:34:11.160717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:32.364  [2024-11-20 14:34:11.160733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:32.364  [2024-11-20 14:34:11.160752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:32.364  [2024-11-20 14:34:11.160764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:32.364  [2024-11-20 14:34:11.160775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:32.364  [2024-11-20 14:34:11.160788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:32.364  [2024-11-20 14:34:11.160799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:32.364  [2024-11-20 14:34:11.160810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:32.364  [2024-11-20 14:34:11.160821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:32.364  [2024-11-20 14:34:11.160832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:32.364  [2024-11-20 14:34:11.160844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:32.364  [2024-11-20 14:34:11.160860] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:32.365  [2024-11-20 14:34:11.160873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:32.365  [2024-11-20 14:34:11.160885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:32.365  [2024-11-20 14:34:11.160896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:32.365  [2024-11-20 14:34:11.160907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:32.365  [2024-11-20 14:34:11.160918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:32.365  [2024-11-20 14:34:11.160930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.365  [2024-11-20 14:34:11.160941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:32.365  [2024-11-20 14:34:11.160967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.139 ms
00:23:32.365  [2024-11-20 14:34:11.160986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
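(Note: the SB metadata regions are listed in blocks, in hex. At the assumed 4 KiB block size they line up with the MiB figures in the layout dump above — e.g. nvc region type 0x2 (blk_sz 0x5a00) matches the 90 MiB L2P, and base-device region type 0x9 (blk_sz 0x1900000) matches the 102400 MiB data area; the type-to-region pairing here is inferred from the matching sizes, not printed by the log:

    printf '0x5a00 blocks    = %d MiB\n' $(( 0x5a00    * 4096 / 1024 / 1024 ))   # -> 90
    printf '0x1900000 blocks = %d MiB\n' $(( 0x1900000 * 4096 / 1024 / 1024 ))   # -> 102400
)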
00:23:32.365  [2024-11-20 14:34:11.193667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.365  [2024-11-20 14:34:11.193730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:32.365  [2024-11-20 14:34:11.193752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.590 ms
00:23:32.365  [2024-11-20 14:34:11.193764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.365  [2024-11-20 14:34:11.193958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.365  [2024-11-20 14:34:11.193990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:32.365  [2024-11-20 14:34:11.194021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.066 ms
00:23:32.365  [2024-11-20 14:34:11.194039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.365  [2024-11-20 14:34:11.263316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.365  [2024-11-20 14:34:11.263399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:32.365  [2024-11-20 14:34:11.263449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 69.226 ms
00:23:32.365  [2024-11-20 14:34:11.263483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.365  [2024-11-20 14:34:11.263761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.365  [2024-11-20 14:34:11.263816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:32.365  [2024-11-20 14:34:11.263848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:23:32.365  [2024-11-20 14:34:11.263871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.365  [2024-11-20 14:34:11.264349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.365  [2024-11-20 14:34:11.264412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:32.365  [2024-11-20 14:34:11.264446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.416 ms
00:23:32.365  [2024-11-20 14:34:11.264486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.365  [2024-11-20 14:34:11.264798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.365  [2024-11-20 14:34:11.264856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:32.365  [2024-11-20 14:34:11.264888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.249 ms
00:23:32.365  [2024-11-20 14:34:11.264912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.365  [2024-11-20 14:34:11.285459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.365  [2024-11-20 14:34:11.285533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:32.365  [2024-11-20 14:34:11.285593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.484 ms
00:23:32.365  [2024-11-20 14:34:11.285622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.365  [2024-11-20 14:34:11.305706] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:23:32.365  [2024-11-20 14:34:11.305781] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:32.365  [2024-11-20 14:34:11.305822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.365  [2024-11-20 14:34:11.305846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:23:32.365  [2024-11-20 14:34:11.305872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.922 ms
00:23:32.365  [2024-11-20 14:34:11.305894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
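(Note: the NV cache load notice above accounts for 4 of the 5 chunks the layout reported — 1 full plus 3 empty. The remaining chunk is presumably the open one being written to; that is an assumption, since the notice does not enumerate open chunks:

    echo "$(( 5 - 1 - 3 )) chunk not listed (assumed open)"    # -> 1
)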
00:23:32.365  [2024-11-20 14:34:11.342537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.365  [2024-11-20 14:34:11.342650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:23:32.365  [2024-11-20 14:34:11.342686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.431 ms
00:23:32.365  [2024-11-20 14:34:11.342711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.624  [2024-11-20 14:34:11.362296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.624  [2024-11-20 14:34:11.362367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:23:32.624  [2024-11-20 14:34:11.362401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.345 ms
00:23:32.624  [2024-11-20 14:34:11.362424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.624  [2024-11-20 14:34:11.381889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.624  [2024-11-20 14:34:11.381961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:23:32.624  [2024-11-20 14:34:11.381997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.150 ms
00:23:32.624  [2024-11-20 14:34:11.382021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.624  [2024-11-20 14:34:11.383191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.624  [2024-11-20 14:34:11.383240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:32.624  [2024-11-20 14:34:11.383271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.894 ms
00:23:32.624  [2024-11-20 14:34:11.383296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.624  [2024-11-20 14:34:11.459812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.624  [2024-11-20 14:34:11.459891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:23:32.624  [2024-11-20 14:34:11.459923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 76.455 ms
00:23:32.624  [2024-11-20 14:34:11.459942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.624  [2024-11-20 14:34:11.472869] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:32.624  [2024-11-20 14:34:11.486960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.624  [2024-11-20 14:34:11.487038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:32.624  [2024-11-20 14:34:11.487071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.804 ms
00:23:32.625  [2024-11-20 14:34:11.487104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.625  [2024-11-20 14:34:11.487342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.625  [2024-11-20 14:34:11.487374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:23:32.625  [2024-11-20 14:34:11.487397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:23:32.625  [2024-11-20 14:34:11.487433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.625  [2024-11-20 14:34:11.487536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.625  [2024-11-20 14:34:11.487566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:32.625  [2024-11-20 14:34:11.487620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.050 ms
00:23:32.625  [2024-11-20 14:34:11.487640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.625  [2024-11-20 14:34:11.487723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.625  [2024-11-20 14:34:11.487752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:32.625  [2024-11-20 14:34:11.487775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.026 ms
00:23:32.625  [2024-11-20 14:34:11.487794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.625  [2024-11-20 14:34:11.487861] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:32.625  [2024-11-20 14:34:11.487889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.625  [2024-11-20 14:34:11.487911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:23:32.625  [2024-11-20 14:34:11.487931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:23:32.625  [2024-11-20 14:34:11.487950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.625  [2024-11-20 14:34:11.519361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.625  [2024-11-20 14:34:11.519433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:32.625  [2024-11-20 14:34:11.519466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.360 ms
00:23:32.625  [2024-11-20 14:34:11.519486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.625  [2024-11-20 14:34:11.519719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.625  [2024-11-20 14:34:11.519751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:32.625  [2024-11-20 14:34:11.519775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.063 ms
00:23:32.625  [2024-11-20 14:34:11.519793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:32.625  [2024-11-20 14:34:11.521081] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:32.625  [2024-11-20 14:34:11.525426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.662 ms, result 0
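(Note: this 'FTL startup' (393.662 ms) is the spdk_dd-side open of the same ftl0 device, restoring state from the superblock and NV cache written during the earlier shutdown. For reference, an FTL bdev of this shape is typically created over a base bdev plus a cache bdev with rpc.py; a minimal sketch — the bdev names follow the nvc0n1p0 cache seen above but are illustrative, and the flags should be checked against the SPDK tree in use:

    ./scripts/rpc.py bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0
)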
00:23:32.625  [2024-11-20 14:34:11.526272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:32.625  [2024-11-20 14:34:11.542741] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:33.644  
[2024-11-20T14:34:13.560Z] Copying: 28/256 [MB] (28 MBps)
[2024-11-20T14:34:14.932Z] Copying: 52/256 [MB] (24 MBps)
[2024-11-20T14:34:15.865Z] Copying: 76/256 [MB] (23 MBps)
[2024-11-20T14:34:16.798Z] Copying: 97/256 [MB] (21 MBps)
[2024-11-20T14:34:17.769Z] Copying: 118/256 [MB] (21 MBps)
[2024-11-20T14:34:18.704Z] Copying: 140/256 [MB] (21 MBps)
[2024-11-20T14:34:19.640Z] Copying: 163/256 [MB] (22 MBps)
[2024-11-20T14:34:20.574Z] Copying: 185/256 [MB] (22 MBps)
[2024-11-20T14:34:22.004Z] Copying: 209/256 [MB] (23 MBps)
[2024-11-20T14:34:22.585Z] Copying: 232/256 [MB] (22 MBps)
[2024-11-20T14:34:22.585Z] Copying: 256/256 [MB] (average 23 MBps)
[2024-11-20 14:34:22.481870] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
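(Note: the final progress line's average is easy to corroborate: 256 MB copied between roughly 14:34:11.5, when startup finished, and 14:34:22.5 is about 11 s, i.e. around 23 MBps, matching the reported figure:

    echo "scale=1; 256 / 11" | bc    # -> 23.2 (MBps)
)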
00:23:43.603  [2024-11-20 14:34:22.494382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.603  [2024-11-20 14:34:22.494454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:43.603  [2024-11-20 14:34:22.494476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:23:43.603  [2024-11-20 14:34:22.494505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.603  [2024-11-20 14:34:22.494540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:43.603  [2024-11-20 14:34:22.498030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.603  [2024-11-20 14:34:22.498075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:43.603  [2024-11-20 14:34:22.498092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.467 ms
00:23:43.603  [2024-11-20 14:34:22.498105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.603  [2024-11-20 14:34:22.498453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.603  [2024-11-20 14:34:22.498511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:43.603  [2024-11-20 14:34:22.498531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.311 ms
00:23:43.603  [2024-11-20 14:34:22.498542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.603  [2024-11-20 14:34:22.502405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.603  [2024-11-20 14:34:22.502465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:43.603  [2024-11-20 14:34:22.502482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.835 ms
00:23:43.603  [2024-11-20 14:34:22.502494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.603  [2024-11-20 14:34:22.510120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.603  [2024-11-20 14:34:22.510165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:43.603  [2024-11-20 14:34:22.510181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.594 ms
00:23:43.603  [2024-11-20 14:34:22.510193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.603  [2024-11-20 14:34:22.541980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.603  [2024-11-20 14:34:22.542048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:43.603  [2024-11-20 14:34:22.542069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.693 ms
00:23:43.603  [2024-11-20 14:34:22.542081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.603  [2024-11-20 14:34:22.561052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.603  [2024-11-20 14:34:22.561138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:43.603  [2024-11-20 14:34:22.561170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.876 ms
00:23:43.603  [2024-11-20 14:34:22.561183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.603  [2024-11-20 14:34:22.561375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.603  [2024-11-20 14:34:22.561398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:43.603  [2024-11-20 14:34:22.561412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.098 ms
00:23:43.603  [2024-11-20 14:34:22.561425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.888  [2024-11-20 14:34:22.593208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.888  [2024-11-20 14:34:22.593277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:43.888  [2024-11-20 14:34:22.593298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.695 ms
00:23:43.888  [2024-11-20 14:34:22.593311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.888  [2024-11-20 14:34:22.624673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.888  [2024-11-20 14:34:22.624734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:43.888  [2024-11-20 14:34:22.624756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.279 ms
00:23:43.888  [2024-11-20 14:34:22.624768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.888  [2024-11-20 14:34:22.656462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.888  [2024-11-20 14:34:22.656534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:43.888  [2024-11-20 14:34:22.656555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.607 ms
00:23:43.888  [2024-11-20 14:34:22.656578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.888  [2024-11-20 14:34:22.689147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.888  [2024-11-20 14:34:22.689222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:43.888  [2024-11-20 14:34:22.689244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.408 ms
00:23:43.888  [2024-11-20 14:34:22.689256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.888  [2024-11-20 14:34:22.689384] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:43.888  [2024-11-20 14:34:22.689411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.689996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.888  [2024-11-20 14:34:22.690533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.690990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:23:43.889  [2024-11-20 14:34:22.691270] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:43.889  [2024-11-20 14:34:22.691293] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         4beb1ddc-3b05-4a56-8819-65e82b329bd5
00:23:43.889  [2024-11-20 14:34:22.691317] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:43.889  [2024-11-20 14:34:22.691337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:43.889  [2024-11-20 14:34:22.691349] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:43.889  [2024-11-20 14:34:22.691360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:43.889  [2024-11-20 14:34:22.691371] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:43.889  [2024-11-20 14:34:22.691390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:43.889  [2024-11-20 14:34:22.691409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:43.889  [2024-11-20 14:34:22.691443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:43.889  [2024-11-20 14:34:22.691463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:23:43.889  [2024-11-20 14:34:22.691481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.889  [2024-11-20 14:34:22.691520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:43.889  [2024-11-20 14:34:22.691545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.099 ms
00:23:43.889  [2024-11-20 14:34:22.691583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.889  [2024-11-20 14:34:22.708917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.889  [2024-11-20 14:34:22.708987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:43.889  [2024-11-20 14:34:22.709009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.282 ms
00:23:43.889  [2024-11-20 14:34:22.709022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.889  [2024-11-20 14:34:22.709713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:43.889  [2024-11-20 14:34:22.709751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:43.889  [2024-11-20 14:34:22.709766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.562 ms
00:23:43.889  [2024-11-20 14:34:22.709778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.889  [2024-11-20 14:34:22.756651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:43.889  [2024-11-20 14:34:22.756724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:43.889  [2024-11-20 14:34:22.756746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:43.889  [2024-11-20 14:34:22.756759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.889  [2024-11-20 14:34:22.756907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:43.889  [2024-11-20 14:34:22.756926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:43.889  [2024-11-20 14:34:22.756938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:43.889  [2024-11-20 14:34:22.756949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.889  [2024-11-20 14:34:22.757025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:43.889  [2024-11-20 14:34:22.757052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:43.889  [2024-11-20 14:34:22.757073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:43.889  [2024-11-20 14:34:22.757089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.889  [2024-11-20 14:34:22.757129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:43.889  [2024-11-20 14:34:22.757163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:43.889  [2024-11-20 14:34:22.757181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:43.889  [2024-11-20 14:34:22.757191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:43.889  [2024-11-20 14:34:22.862723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:43.889  [2024-11-20 14:34:22.862801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:43.889  [2024-11-20 14:34:22.862823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:43.889  [2024-11-20 14:34:22.862835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.147  [2024-11-20 14:34:22.949815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:44.147  [2024-11-20 14:34:22.949890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:44.147  [2024-11-20 14:34:22.949913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:44.147  [2024-11-20 14:34:22.949925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.147  [2024-11-20 14:34:22.950018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:44.147  [2024-11-20 14:34:22.950035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:44.147  [2024-11-20 14:34:22.950047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:44.147  [2024-11-20 14:34:22.950058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.147  [2024-11-20 14:34:22.950098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:44.147  [2024-11-20 14:34:22.950129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:44.147  [2024-11-20 14:34:22.950177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:44.147  [2024-11-20 14:34:22.950195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.147  [2024-11-20 14:34:22.950372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:44.147  [2024-11-20 14:34:22.950420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:44.147  [2024-11-20 14:34:22.950449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:44.147  [2024-11-20 14:34:22.950472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.147  [2024-11-20 14:34:22.950557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:44.147  [2024-11-20 14:34:22.950645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:44.147  [2024-11-20 14:34:22.950672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:44.147  [2024-11-20 14:34:22.950716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.147  [2024-11-20 14:34:22.950799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:44.148  [2024-11-20 14:34:22.950823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:44.148  [2024-11-20 14:34:22.950837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:44.148  [2024-11-20 14:34:22.950850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.148  [2024-11-20 14:34:22.950934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:44.148  [2024-11-20 14:34:22.950963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:44.148  [2024-11-20 14:34:22.951004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:44.148  [2024-11-20 14:34:22.951017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:44.148  [2024-11-20 14:34:22.951299] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.920 ms, result 0
00:23:45.081  
00:23:45.081  
00:23:45.081   14:34:23 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:23:45.081   14:34:23 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:23:45.648   14:34:24 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:45.906  [2024-11-20 14:34:24.649510] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:23:45.906  [2024-11-20 14:34:24.649764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79044 ]
00:23:45.906  [2024-11-20 14:34:24.826943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:46.163  [2024-11-20 14:34:24.933260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:46.421  [2024-11-20 14:34:25.257940] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:46.421  [2024-11-20 14:34:25.258026] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:46.680  [2024-11-20 14:34:25.419707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.419780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:46.681  [2024-11-20 14:34:25.419800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:23:46.681  [2024-11-20 14:34:25.419812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.423066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.423111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:46.681  [2024-11-20 14:34:25.423127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.224 ms
00:23:46.681  [2024-11-20 14:34:25.423139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.423262] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:46.681  [2024-11-20 14:34:25.424224] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:46.681  [2024-11-20 14:34:25.424269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.424283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:46.681  [2024-11-20 14:34:25.424295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.018 ms
00:23:46.681  [2024-11-20 14:34:25.424306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.425453] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:46.681  [2024-11-20 14:34:25.441740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.441796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:23:46.681  [2024-11-20 14:34:25.441815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.287 ms
00:23:46.681  [2024-11-20 14:34:25.441826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.441957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.441979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:23:46.681  [2024-11-20 14:34:25.441991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.027 ms
00:23:46.681  [2024-11-20 14:34:25.442003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.446408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.446461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:46.681  [2024-11-20 14:34:25.446478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.345 ms
00:23:46.681  [2024-11-20 14:34:25.446489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.446660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.446683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:46.681  [2024-11-20 14:34:25.446697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.096 ms
00:23:46.681  [2024-11-20 14:34:25.446709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.446749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.446769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:46.681  [2024-11-20 14:34:25.446781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:23:46.681  [2024-11-20 14:34:25.446792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.446824] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:46.681  [2024-11-20 14:34:25.451086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.451123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:46.681  [2024-11-20 14:34:25.451139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.271 ms
00:23:46.681  [2024-11-20 14:34:25.451150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.451229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.451248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:46.681  [2024-11-20 14:34:25.451260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.023 ms
00:23:46.681  [2024-11-20 14:34:25.451270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.451303] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:46.681  [2024-11-20 14:34:25.451337] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:23:46.681  [2024-11-20 14:34:25.451380] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:46.681  [2024-11-20 14:34:25.451400] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:23:46.681  [2024-11-20 14:34:25.451524] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:46.681  [2024-11-20 14:34:25.451540] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:46.681  [2024-11-20 14:34:25.451555] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:46.681  [2024-11-20 14:34:25.451582] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:46.681  [2024-11-20 14:34:25.451605] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:46.681  [2024-11-20 14:34:25.451617] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:46.681  [2024-11-20 14:34:25.451628] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:46.681  [2024-11-20 14:34:25.451638] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:46.681  [2024-11-20 14:34:25.451648] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:46.681  [2024-11-20 14:34:25.451660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.451671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:46.681  [2024-11-20 14:34:25.451682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.360 ms
00:23:46.681  [2024-11-20 14:34:25.451693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.451795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.681  [2024-11-20 14:34:25.451815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:46.681  [2024-11-20 14:34:25.451826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:23:46.681  [2024-11-20 14:34:25.451837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.681  [2024-11-20 14:34:25.451979] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:46.681  [2024-11-20 14:34:25.451998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:46.681  [2024-11-20 14:34:25.452011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:46.681  [2024-11-20 14:34:25.452022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:46.681  [2024-11-20 14:34:25.452033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:46.681  [2024-11-20 14:34:25.452043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:46.681  [2024-11-20 14:34:25.452054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:46.681  [2024-11-20 14:34:25.452064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:46.681  [2024-11-20 14:34:25.452075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:46.681  [2024-11-20 14:34:25.452084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:46.681  [2024-11-20 14:34:25.452094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:46.681  [2024-11-20 14:34:25.452104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:46.681  [2024-11-20 14:34:25.452113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:46.681  [2024-11-20 14:34:25.452137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:46.681  [2024-11-20 14:34:25.452148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:46.681  [2024-11-20 14:34:25.452158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:46.681  [2024-11-20 14:34:25.452170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:46.681  [2024-11-20 14:34:25.452180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:46.681  [2024-11-20 14:34:25.452190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:46.681  [2024-11-20 14:34:25.452200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:46.681  [2024-11-20 14:34:25.452210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:46.681  [2024-11-20 14:34:25.452219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:46.681  [2024-11-20 14:34:25.452229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:46.681  [2024-11-20 14:34:25.452239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:46.681  [2024-11-20 14:34:25.452249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:46.681  [2024-11-20 14:34:25.452259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:46.681  [2024-11-20 14:34:25.452269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:46.681  [2024-11-20 14:34:25.452278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:46.681  [2024-11-20 14:34:25.452288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:46.681  [2024-11-20 14:34:25.452298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:46.681  [2024-11-20 14:34:25.452308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:46.681  [2024-11-20 14:34:25.452317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:46.681  [2024-11-20 14:34:25.452327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:46.681  [2024-11-20 14:34:25.452337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:46.682  [2024-11-20 14:34:25.452347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:46.682  [2024-11-20 14:34:25.452356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:46.682  [2024-11-20 14:34:25.452366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:46.682  [2024-11-20 14:34:25.452376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:46.682  [2024-11-20 14:34:25.452386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:46.682  [2024-11-20 14:34:25.452395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:46.682  [2024-11-20 14:34:25.452405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:46.682  [2024-11-20 14:34:25.452415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:46.682  [2024-11-20 14:34:25.452424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:46.682  [2024-11-20 14:34:25.452434] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:46.682  [2024-11-20 14:34:25.452445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:46.682  [2024-11-20 14:34:25.452455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:46.682  [2024-11-20 14:34:25.452470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:46.682  [2024-11-20 14:34:25.452481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:46.682  [2024-11-20 14:34:25.452493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:46.682  [2024-11-20 14:34:25.452503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:46.682  [2024-11-20 14:34:25.452513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:46.682  [2024-11-20 14:34:25.452522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:46.682  [2024-11-20 14:34:25.452533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:23:46.682  [2024-11-20 14:34:25.452545] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:46.682  [2024-11-20 14:34:25.452558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:46.682  [2024-11-20 14:34:25.452586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:46.682  [2024-11-20 14:34:25.452600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:46.682  [2024-11-20 14:34:25.452610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:46.682  [2024-11-20 14:34:25.452622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:46.682  [2024-11-20 14:34:25.452632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:46.682  [2024-11-20 14:34:25.452644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:46.682  [2024-11-20 14:34:25.452654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:46.682  [2024-11-20 14:34:25.452665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:46.682  [2024-11-20 14:34:25.452676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:46.682  [2024-11-20 14:34:25.452687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:46.682  [2024-11-20 14:34:25.452697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:46.682  [2024-11-20 14:34:25.452708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:46.682  [2024-11-20 14:34:25.452719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:46.682  [2024-11-20 14:34:25.452730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:46.682  [2024-11-20 14:34:25.452741] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:46.682  [2024-11-20 14:34:25.452753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:46.682  [2024-11-20 14:34:25.452764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:46.682  [2024-11-20 14:34:25.452775] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:46.682  [2024-11-20 14:34:25.452786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:46.682  [2024-11-20 14:34:25.452797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:46.682  [2024-11-20 14:34:25.452809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.452820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:46.682  [2024-11-20 14:34:25.452837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.901 ms
00:23:46.682  [2024-11-20 14:34:25.452848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.485502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.485564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:46.682  [2024-11-20 14:34:25.485598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.579 ms
00:23:46.682  [2024-11-20 14:34:25.485611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.485806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.485833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:46.682  [2024-11-20 14:34:25.485847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.068 ms
00:23:46.682  [2024-11-20 14:34:25.485858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.535487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.535553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:46.682  [2024-11-20 14:34:25.535584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 49.595 ms
00:23:46.682  [2024-11-20 14:34:25.535606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.535771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.535791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:46.682  [2024-11-20 14:34:25.535804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:23:46.682  [2024-11-20 14:34:25.535815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.536143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.536173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:46.682  [2024-11-20 14:34:25.536187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.295 ms
00:23:46.682  [2024-11-20 14:34:25.536206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.536365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.536389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:46.682  [2024-11-20 14:34:25.536402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.126 ms
00:23:46.682  [2024-11-20 14:34:25.536413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.553303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.553362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:46.682  [2024-11-20 14:34:25.553381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.858 ms
00:23:46.682  [2024-11-20 14:34:25.553393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.569819] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:23:46.682  [2024-11-20 14:34:25.569863] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:46.682  [2024-11-20 14:34:25.569882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.569894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:23:46.682  [2024-11-20 14:34:25.569907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.317 ms
00:23:46.682  [2024-11-20 14:34:25.569919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.599740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.599797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:23:46.682  [2024-11-20 14:34:25.599813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.725 ms
00:23:46.682  [2024-11-20 14:34:25.599826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.615550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.615608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:23:46.682  [2024-11-20 14:34:25.615626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.620 ms
00:23:46.682  [2024-11-20 14:34:25.615638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.631153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.631196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:23:46.682  [2024-11-20 14:34:25.631212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.415 ms
00:23:46.682  [2024-11-20 14:34:25.631223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.682  [2024-11-20 14:34:25.632057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.682  [2024-11-20 14:34:25.632096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:46.682  [2024-11-20 14:34:25.632112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.707 ms
00:23:46.682  [2024-11-20 14:34:25.632123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.941  [2024-11-20 14:34:25.704553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.941  [2024-11-20 14:34:25.704631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:23:46.941  [2024-11-20 14:34:25.704651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 72.394 ms
00:23:46.941  [2024-11-20 14:34:25.704663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.941  [2024-11-20 14:34:25.717418] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:46.941  [2024-11-20 14:34:25.731372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.941  [2024-11-20 14:34:25.731459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:46.941  [2024-11-20 14:34:25.731479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.556 ms
00:23:46.941  [2024-11-20 14:34:25.731498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.941  [2024-11-20 14:34:25.731656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.941  [2024-11-20 14:34:25.731678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:23:46.941  [2024-11-20 14:34:25.731692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:23:46.941  [2024-11-20 14:34:25.731704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.941  [2024-11-20 14:34:25.731772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.941  [2024-11-20 14:34:25.731789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:46.941  [2024-11-20 14:34:25.731801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.039 ms
00:23:46.941  [2024-11-20 14:34:25.731818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.941  [2024-11-20 14:34:25.731864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.941  [2024-11-20 14:34:25.731881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:46.941  [2024-11-20 14:34:25.731893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:23:46.941  [2024-11-20 14:34:25.731904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.941  [2024-11-20 14:34:25.731946] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:46.941  [2024-11-20 14:34:25.731968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.941  [2024-11-20 14:34:25.731980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:23:46.941  [2024-11-20 14:34:25.731992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.024 ms
00:23:46.941  [2024-11-20 14:34:25.732002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.941  [2024-11-20 14:34:25.764098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.941  [2024-11-20 14:34:25.764158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:46.941  [2024-11-20 14:34:25.764177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.061 ms
00:23:46.941  [2024-11-20 14:34:25.764189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.941  [2024-11-20 14:34:25.764347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:46.941  [2024-11-20 14:34:25.764369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:46.941  [2024-11-20 14:34:25.764382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.048 ms
00:23:46.941  [2024-11-20 14:34:25.764399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:46.941  [2024-11-20 14:34:25.765507] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:46.941  [2024-11-20 14:34:25.769658] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 345.468 ms, result 0
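
[editor's note] The 'FTL startup' summary above aggregates the trace_step entries that precede it. A minimal sketch for checking that offline, assuming the console output has been saved to a file ("build.log" is an illustrative name, not something this job produces):

    # Pair each trace_step "name:" line with the "duration:" line that
    # follows it, then total the per-step durations for comparison with
    # the "Management process finished ... duration" summary line.
    import re

    name_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\]\s+name:\s+(.+)")
    dur_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\]\s+duration:\s+([\d.]+) ms")

    steps, pending = [], None
    with open("build.log") as f:
        for line in f:
            m = name_re.search(line)
            if m:
                pending = m.group(1).strip()
                continue
            m = dur_re.search(line)
            if m and pending is not None:
                steps.append((pending, float(m.group(1))))
                pending = None

    for name, ms in steps:
        print(f"{ms:9.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):9.3f} ms  total")

The per-step sum will typically come in under the reported 345.468 ms, since the management-process total also covers time spent between steps.
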
00:23:46.941  [2024-11-20 14:34:25.770428] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:46.941  [2024-11-20 14:34:25.786852] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:47.201  [2024-11-20T14:34:26.183Z] Copying: 4096/4096 [kB] (average 27 MBps)
00:23:47.201  [2024-11-20 14:34:25.937717] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:47.201  [2024-11-20 14:34:25.949980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:25.950024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:47.201  [2024-11-20 14:34:25.950053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:47.201  [2024-11-20 14:34:25.950065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:25.950096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:47.201  [2024-11-20 14:34:25.953411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:25.953445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:47.201  [2024-11-20 14:34:25.953461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.294 ms
00:23:47.201  [2024-11-20 14:34:25.953472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:25.955252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:25.955293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:47.201  [2024-11-20 14:34:25.955309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.721 ms
00:23:47.201  [2024-11-20 14:34:25.955321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:25.959304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:25.959342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:47.201  [2024-11-20 14:34:25.959357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.950 ms
00:23:47.201  [2024-11-20 14:34:25.959368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:25.966968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:25.967003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:47.201  [2024-11-20 14:34:25.967017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.560 ms
00:23:47.201  [2024-11-20 14:34:25.967028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:25.998166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:25.998222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:47.201  [2024-11-20 14:34:25.998240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.053 ms
00:23:47.201  [2024-11-20 14:34:25.998251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:26.016143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:26.016206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:47.201  [2024-11-20 14:34:26.016224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.819 ms
00:23:47.201  [2024-11-20 14:34:26.016237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:26.016407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:26.016427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:47.201  [2024-11-20 14:34:26.016441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.095 ms
00:23:47.201  [2024-11-20 14:34:26.016453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:26.048097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:26.048141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:47.201  [2024-11-20 14:34:26.048157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.604 ms
00:23:47.201  [2024-11-20 14:34:26.048168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:26.079082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:26.079124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:47.201  [2024-11-20 14:34:26.079141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.844 ms
00:23:47.201  [2024-11-20 14:34:26.079152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:26.109931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:26.109973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:47.201  [2024-11-20 14:34:26.109989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.715 ms
00:23:47.201  [2024-11-20 14:34:26.110000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.201  [2024-11-20 14:34:26.141295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.201  [2024-11-20 14:34:26.141360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:47.201  [2024-11-20 14:34:26.141379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.193 ms
00:23:47.202  [2024-11-20 14:34:26.141390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.202  [2024-11-20 14:34:26.141476] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:47.202  [2024-11-20 14:34:26.141503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.141988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.202  [2024-11-20 14:34:26.142512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:47.203  [2024-11-20 14:34:26.142694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
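
[editor's note] All 100 bands in the dump above are identical (0 / 261120 valid blocks, wr_cnt 0, state free). A minimal sketch that collapses such a dump into a per-state count, again assuming the log has been saved as "build.log":

    # Summarize the "Bands validity" dump by band state.
    import re
    from collections import Counter

    band_re = re.compile(
        r"ftl_dev_dump_bands: .*Band\s+(\d+):\s+(\d+) / (\d+)\s+wr_cnt: (\d+)\s+state: (\w+)"
    )

    states = Counter()
    with open("build.log") as f:
        for line in f:
            m = band_re.search(line)
            if m:
                states[m.group(5)] += 1

    for state, count in states.most_common():
        print(f"{count:4d} bands in state '{state}'")
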
00:23:47.203  [2024-11-20 14:34:26.142715] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:47.203  [2024-11-20 14:34:26.142726] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         4beb1ddc-3b05-4a56-8819-65e82b329bd5
00:23:47.203  [2024-11-20 14:34:26.142738] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:47.203  [2024-11-20 14:34:26.142748] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:47.203  [2024-11-20 14:34:26.142759] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:47.203  [2024-11-20 14:34:26.142770] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:47.203  [2024-11-20 14:34:26.142780] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:47.203  [2024-11-20 14:34:26.142790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:47.203  [2024-11-20 14:34:26.142807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:47.203  [2024-11-20 14:34:26.142817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:47.203  [2024-11-20 14:34:26.142827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
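
[editor's note] The "WAF: inf" entry above follows from the two counters printed with it, assuming the conventional definition of write amplification as total (device) writes divided by user writes: with 960 total writes and 0 user writes, the ratio is infinite.

    # Write-amplification arithmetic behind the stats dump above.
    total_writes = 960
    user_writes = 0
    waf = total_writes / user_writes if user_writes else float("inf")
    print(f"WAF: {waf}")  # -> WAF: inf
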
00:23:47.203  [2024-11-20 14:34:26.142838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.203  [2024-11-20 14:34:26.142849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:47.203  [2024-11-20 14:34:26.142861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.365 ms
00:23:47.203  [2024-11-20 14:34:26.142872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.203  [2024-11-20 14:34:26.159620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.203  [2024-11-20 14:34:26.159671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:47.203  [2024-11-20 14:34:26.159689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.719 ms
00:23:47.203  [2024-11-20 14:34:26.159701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.203  [2024-11-20 14:34:26.160171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:47.203  [2024-11-20 14:34:26.160203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:47.203  [2024-11-20 14:34:26.160218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.405 ms
00:23:47.203  [2024-11-20 14:34:26.160229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.206185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.206258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:47.462  [2024-11-20 14:34:26.206276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.206295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.206439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.206458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:47.462  [2024-11-20 14:34:26.206470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.206481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.206547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.206566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:47.462  [2024-11-20 14:34:26.206598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.206609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.206641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.206655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:47.462  [2024-11-20 14:34:26.206667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.206677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.311561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.311643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:47.462  [2024-11-20 14:34:26.311663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.311684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.399014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.399097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:47.462  [2024-11-20 14:34:26.399117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.399129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.399231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.399249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:47.462  [2024-11-20 14:34:26.399262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.399273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.399308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.399336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:47.462  [2024-11-20 14:34:26.399347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.399358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.399510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.399531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:47.462  [2024-11-20 14:34:26.399548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.399566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.399665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.399683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:47.462  [2024-11-20 14:34:26.399702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.399713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.399765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.399780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:47.462  [2024-11-20 14:34:26.399791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.399802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.399855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:47.462  [2024-11-20 14:34:26.399879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:47.462  [2024-11-20 14:34:26.399892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:47.462  [2024-11-20 14:34:26.399903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:47.462  [2024-11-20 14:34:26.400071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 450.087 ms, result 0
00:23:48.397  
00:23:48.397  
00:23:48.397  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:48.397   14:34:27 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79076
00:23:48.397   14:34:27 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:23:48.397   14:34:27 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79076
00:23:48.397   14:34:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79076 ']'
00:23:48.397   14:34:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:48.397   14:34:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:48.397   14:34:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:48.397   14:34:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:48.397   14:34:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:23:48.655  [2024-11-20 14:34:27.485231] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:23:48.655  [2024-11-20 14:34:27.485409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79076 ]
00:23:48.913  [2024-11-20 14:34:27.670328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:48.913  [2024-11-20 14:34:27.774641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
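
[editor's note] The trim.sh lines above launch spdk_tgt in the background, record its pid as svcpid, and block in waitforlisten until the RPC socket is up. A minimal Python sketch of the same start-and-wait pattern; the polling loop is an assumption for illustration, not the actual waitforlisten implementation, which also retries and checks that the pid is still alive:

    # Start spdk_tgt (binary path and -L flag taken from the log above)
    # and wait for its UNIX-domain RPC socket to appear.
    import os
    import subprocess
    import time

    proc = subprocess.Popen(
        ["/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt", "-L", "ftl_init"]
    )
    sock = "/var/tmp/spdk.sock"
    while not os.path.exists(sock):  # crude poll; stand-in for waitforlisten
        time.sleep(0.1)
    print(f"spdk_tgt pid {proc.pid} is listening on {sock}")
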
00:23:49.846   14:34:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:49.846   14:34:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:23:49.846   14:34:28 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config
00:23:50.104  [2024-11-20 14:34:28.907902] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:50.104  [2024-11-20 14:34:28.907994] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:50.104  [2024-11-20 14:34:29.073455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.104  [2024-11-20 14:34:29.073529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:50.104  [2024-11-20 14:34:29.073554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:23:50.104  [2024-11-20 14:34:29.073713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.104  [2024-11-20 14:34:29.077760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.104  [2024-11-20 14:34:29.077808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:50.104  [2024-11-20 14:34:29.077831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.993 ms
00:23:50.104  [2024-11-20 14:34:29.077844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.104  [2024-11-20 14:34:29.078148] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:50.104  [2024-11-20 14:34:29.079122] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:50.104  [2024-11-20 14:34:29.079168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.104  [2024-11-20 14:34:29.079183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:50.104  [2024-11-20 14:34:29.079200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.037 ms
00:23:50.104  [2024-11-20 14:34:29.079212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.104  [2024-11-20 14:34:29.080517] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:50.362  [2024-11-20 14:34:29.097291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.362  [2024-11-20 14:34:29.097361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:23:50.362  [2024-11-20 14:34:29.097383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.779 ms
00:23:50.362  [2024-11-20 14:34:29.097399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.362  [2024-11-20 14:34:29.097551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.362  [2024-11-20 14:34:29.097620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:23:50.362  [2024-11-20 14:34:29.097645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:23:50.362  [2024-11-20 14:34:29.097661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.362  [2024-11-20 14:34:29.102336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.362  [2024-11-20 14:34:29.102411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:50.362  [2024-11-20 14:34:29.102431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.598 ms
00:23:50.362  [2024-11-20 14:34:29.102447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.362  [2024-11-20 14:34:29.102639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.362  [2024-11-20 14:34:29.102667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:50.362  [2024-11-20 14:34:29.102682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.109 ms
00:23:50.362  [2024-11-20 14:34:29.102698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.102747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.102778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:50.363  [2024-11-20 14:34:29.102793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:23:50.363  [2024-11-20 14:34:29.102808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.102846] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:50.363  [2024-11-20 14:34:29.107232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.107275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:50.363  [2024-11-20 14:34:29.107296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.392 ms
00:23:50.363  [2024-11-20 14:34:29.107309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.107388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.107408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:50.363  [2024-11-20 14:34:29.107446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:23:50.363  [2024-11-20 14:34:29.107472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.107547] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:50.363  [2024-11-20 14:34:29.107602] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:23:50.363  [2024-11-20 14:34:29.107677] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:50.363  [2024-11-20 14:34:29.107703] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:23:50.363  [2024-11-20 14:34:29.107820] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:50.363  [2024-11-20 14:34:29.107848] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:50.363  [2024-11-20 14:34:29.107877] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:50.363  [2024-11-20 14:34:29.107894] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:50.363  [2024-11-20 14:34:29.107911] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:50.363  [2024-11-20 14:34:29.107925] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:50.363  [2024-11-20 14:34:29.107939] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:50.363  [2024-11-20 14:34:29.107951] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:50.363  [2024-11-20 14:34:29.107967] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:50.363  [2024-11-20 14:34:29.107982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.107996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:50.363  [2024-11-20 14:34:29.108010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.444 ms
00:23:50.363  [2024-11-20 14:34:29.108025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.108132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.108157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:50.363  [2024-11-20 14:34:29.108171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:23:50.363  [2024-11-20 14:34:29.108186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.108302] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:50.363  [2024-11-20 14:34:29.108331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:50.363  [2024-11-20 14:34:29.108345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:50.363  [2024-11-20 14:34:29.108360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:50.363  [2024-11-20 14:34:29.108387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:50.363  [2024-11-20 14:34:29.108417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:50.363  [2024-11-20 14:34:29.108429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:50.363  [2024-11-20 14:34:29.108454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:50.363  [2024-11-20 14:34:29.108468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:50.363  [2024-11-20 14:34:29.108479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:50.363  [2024-11-20 14:34:29.108492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:50.363  [2024-11-20 14:34:29.108504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:50.363  [2024-11-20 14:34:29.108518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:50.363  [2024-11-20 14:34:29.108543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:50.363  [2024-11-20 14:34:29.108555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:50.363  [2024-11-20 14:34:29.108618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:50.363  [2024-11-20 14:34:29.108646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:50.363  [2024-11-20 14:34:29.108662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:50.363  [2024-11-20 14:34:29.108687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:50.363  [2024-11-20 14:34:29.108699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:50.363  [2024-11-20 14:34:29.108724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:50.363  [2024-11-20 14:34:29.108737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:50.363  [2024-11-20 14:34:29.108762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:50.363  [2024-11-20 14:34:29.108774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:50.363  [2024-11-20 14:34:29.108800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:50.363  [2024-11-20 14:34:29.108814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:50.363  [2024-11-20 14:34:29.108825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:50.363  [2024-11-20 14:34:29.108839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:50.363  [2024-11-20 14:34:29.108851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:50.363  [2024-11-20 14:34:29.108867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:50.363  [2024-11-20 14:34:29.108893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:50.363  [2024-11-20 14:34:29.108905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108918] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:50.363  [2024-11-20 14:34:29.108934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:50.363  [2024-11-20 14:34:29.108948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:50.363  [2024-11-20 14:34:29.108960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:50.363  [2024-11-20 14:34:29.108974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:50.363  [2024-11-20 14:34:29.108986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:50.363  [2024-11-20 14:34:29.109000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:50.363  [2024-11-20 14:34:29.109012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:50.363  [2024-11-20 14:34:29.109025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:50.363  [2024-11-20 14:34:29.109037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
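
[editor's note] The 90.00 MiB shown for "Region l2p" in the NV cache layout is consistent with the header values above it: 23592960 L2P entries at an address size of 4 bytes.

    # L2P region size check from the layout dump above.
    entries = 23592960
    addr_size = 4  # bytes per entry ("L2P address size: 4")
    mib = entries * addr_size / (1024 * 1024)
    print(f"{mib:.2f} MiB")  # -> 90.00 MiB
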
00:23:50.363  [2024-11-20 14:34:29.109074] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:50.363  [2024-11-20 14:34:29.109091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:50.363  [2024-11-20 14:34:29.109109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:50.363  [2024-11-20 14:34:29.109121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:50.363  [2024-11-20 14:34:29.109137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:50.363  [2024-11-20 14:34:29.109150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:50.363  [2024-11-20 14:34:29.109164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:50.363  [2024-11-20 14:34:29.109177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:50.363  [2024-11-20 14:34:29.109191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:50.363  [2024-11-20 14:34:29.109203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:50.363  [2024-11-20 14:34:29.109217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:50.363  [2024-11-20 14:34:29.109230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:50.363  [2024-11-20 14:34:29.109244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:50.363  [2024-11-20 14:34:29.109256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:50.363  [2024-11-20 14:34:29.109270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:50.363  [2024-11-20 14:34:29.109283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:50.363  [2024-11-20 14:34:29.109298] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:50.363  [2024-11-20 14:34:29.109311] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:50.363  [2024-11-20 14:34:29.109330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:50.363  [2024-11-20 14:34:29.109343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:50.363  [2024-11-20 14:34:29.109358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:50.363  [2024-11-20 14:34:29.109371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:23:50.363  [2024-11-20 14:34:29.109387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.109400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:50.363  [2024-11-20 14:34:29.109415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.153 ms
00:23:50.363  [2024-11-20 14:34:29.109427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.144084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.144155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:50.363  [2024-11-20 14:34:29.144185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 34.565 ms
00:23:50.363  [2024-11-20 14:34:29.144207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.144409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.144449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:50.363  [2024-11-20 14:34:29.144473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.067 ms
00:23:50.363  [2024-11-20 14:34:29.144487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.187882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.187955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:50.363  [2024-11-20 14:34:29.187980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 43.351 ms
00:23:50.363  [2024-11-20 14:34:29.187993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.188152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.188173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:50.363  [2024-11-20 14:34:29.188189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:50.363  [2024-11-20 14:34:29.188202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.188533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.188562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:50.363  [2024-11-20 14:34:29.188618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.298 ms
00:23:50.363  [2024-11-20 14:34:29.188636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.188810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.188838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:50.363  [2024-11-20 14:34:29.188855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.128 ms
00:23:50.363  [2024-11-20 14:34:29.188868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.208424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.208497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:50.363  [2024-11-20 14:34:29.208527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.514 ms
00:23:50.363  [2024-11-20 14:34:29.208543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.363  [2024-11-20 14:34:29.238672] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:23:50.363  [2024-11-20 14:34:29.238760] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:50.363  [2024-11-20 14:34:29.238794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.363  [2024-11-20 14:34:29.238810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:23:50.363  [2024-11-20 14:34:29.238833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.029 ms
00:23:50.364  [2024-11-20 14:34:29.238847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.364  [2024-11-20 14:34:29.270128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.364  [2024-11-20 14:34:29.270246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:23:50.364  [2024-11-20 14:34:29.270280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.049 ms
00:23:50.364  [2024-11-20 14:34:29.270296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.364  [2024-11-20 14:34:29.287301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.364  [2024-11-20 14:34:29.287392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:23:50.364  [2024-11-20 14:34:29.287447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.777 ms
00:23:50.364  [2024-11-20 14:34:29.287480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.364  [2024-11-20 14:34:29.304279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.364  [2024-11-20 14:34:29.304367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:23:50.364  [2024-11-20 14:34:29.304397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.558 ms
00:23:50.364  [2024-11-20 14:34:29.304412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.364  [2024-11-20 14:34:29.305396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.364  [2024-11-20 14:34:29.305437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:50.364  [2024-11-20 14:34:29.305459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.720 ms
00:23:50.364  [2024-11-20 14:34:29.305472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.622  [2024-11-20 14:34:29.384197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.622  [2024-11-20 14:34:29.384274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:23:50.622  [2024-11-20 14:34:29.384305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 78.675 ms
00:23:50.622  [2024-11-20 14:34:29.384321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.622  [2024-11-20 14:34:29.397550] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
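The resident-size message can be cross-checked against the geometry this same FTL instance reports in its layout summary: 23592960 L2P entries at 4 B per address give

    23592960 x 4 B = 94371840 B = 90 MiB

for the full translation table, so the 60 MiB cache budget cannot hold it all and residency is capped at 59 MiB.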
00:23:50.622  [2024-11-20 14:34:29.411994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.622  [2024-11-20 14:34:29.412236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:50.622  [2024-11-20 14:34:29.412272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 27.507 ms
00:23:50.622  [2024-11-20 14:34:29.412294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.622  [2024-11-20 14:34:29.412463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.622  [2024-11-20 14:34:29.412493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:23:50.622  [2024-11-20 14:34:29.412509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:23:50.622  [2024-11-20 14:34:29.412528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.622  [2024-11-20 14:34:29.412623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.622  [2024-11-20 14:34:29.412654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:50.622  [2024-11-20 14:34:29.412676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.063 ms
00:23:50.622  [2024-11-20 14:34:29.412695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.622  [2024-11-20 14:34:29.412732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.622  [2024-11-20 14:34:29.412756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:50.622  [2024-11-20 14:34:29.412772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:23:50.622  [2024-11-20 14:34:29.412792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.622  [2024-11-20 14:34:29.412846] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:50.622  [2024-11-20 14:34:29.412877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.622  [2024-11-20 14:34:29.412897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:23:50.622  [2024-11-20 14:34:29.412917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.020 ms
00:23:50.622  [2024-11-20 14:34:29.412936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.622  [2024-11-20 14:34:29.444716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.622  [2024-11-20 14:34:29.444777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:50.622  [2024-11-20 14:34:29.444808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.732 ms
00:23:50.622  [2024-11-20 14:34:29.444823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.622  [2024-11-20 14:34:29.444980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.622  [2024-11-20 14:34:29.445002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:50.622  [2024-11-20 14:34:29.445024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.046 ms
00:23:50.622  [2024-11-20 14:34:29.445037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.622  [2024-11-20 14:34:29.446230] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:50.622  [2024-11-20 14:34:29.450683] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.300 ms, result 0
00:23:50.622  [2024-11-20 14:34:29.452002] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:50.622  Some configs were skipped because the RPC state that can call them has already passed.
00:23:50.622   14:34:29 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:23:50.880  [2024-11-20 14:34:29.802216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:50.880  [2024-11-20 14:34:29.802293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:23:50.881  [2024-11-20 14:34:29.802316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.385 ms
00:23:50.881  [2024-11-20 14:34:29.802333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:50.881  [2024-11-20 14:34:29.802388] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.562 ms, result 0
00:23:50.881  true
00:23:50.881   14:34:29 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:23:51.448  [2024-11-20 14:34:30.126304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:51.448  [2024-11-20 14:34:30.126387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Process trim
00:23:51.448  [2024-11-20 14:34:30.126415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.042 ms
00:23:51.448  [2024-11-20 14:34:30.126429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:51.448  [2024-11-20 14:34:30.126487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.236 ms, result 0
00:23:51.448  true
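Taken together, the two bdev_ftl_unmap calls (each answered with "true") trim 1024 blocks at both extremes of the device: LBA 0 at the start, and LBA 23591936 at the end, since with the 23592960 addressable blocks reported at startup

    23592960 - 1024 = 23591936

is exactly the start of the final 1024-block range.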
00:23:51.448   14:34:30 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79076
00:23:51.448   14:34:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79076 ']'
00:23:51.448   14:34:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79076
00:23:51.448    14:34:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:23:51.448   14:34:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:51.448    14:34:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79076
00:23:51.448  killing process with pid 79076
00:23:51.448   14:34:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:51.448   14:34:30 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:51.448   14:34:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79076'
00:23:51.448   14:34:30 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79076
00:23:51.448   14:34:30 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79076
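The xtrace lines above come from the killprocess helper in autotest_common.sh. Reconstructed from the traced commands alone, a minimal sketch of its logic looks roughly like this (the sudo branch and the exact return handling are assumptions, not the verbatim source):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # require a pid argument
        kill -0 "$pid" || return 1           # process must still exist
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            :                                # (assumed) handle a sudo wrapper specially
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it so the exit status propagates
    }

Here the target (pid 79076) reports comm "reactor_0", so the plain kill/wait path runs.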
00:23:52.383  [2024-11-20 14:34:31.161116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.161204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:23:52.383  [2024-11-20 14:34:31.161226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:52.383  [2024-11-20 14:34:31.161245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.161280] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:23:52.383  [2024-11-20 14:34:31.164616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.164651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:23:52.383  [2024-11-20 14:34:31.164672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.309 ms
00:23:52.383  [2024-11-20 14:34:31.164685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.165026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.165057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:23:52.383  [2024-11-20 14:34:31.165074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.288 ms
00:23:52.383  [2024-11-20 14:34:31.165087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.169151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.169199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:23:52.383  [2024-11-20 14:34:31.169219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.034 ms
00:23:52.383  [2024-11-20 14:34:31.169232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.177313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.177355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:23:52.383  [2024-11-20 14:34:31.177376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.027 ms
00:23:52.383  [2024-11-20 14:34:31.177389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.189980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.190032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:23:52.383  [2024-11-20 14:34:31.190058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.487 ms
00:23:52.383  [2024-11-20 14:34:31.190084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.198528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.198592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:23:52.383  [2024-11-20 14:34:31.198615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 8.358 ms
00:23:52.383  [2024-11-20 14:34:31.198629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.198797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.198818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:23:52.383  [2024-11-20 14:34:31.198835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.098 ms
00:23:52.383  [2024-11-20 14:34:31.198848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.211642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.211686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:23:52.383  [2024-11-20 14:34:31.211706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.763 ms
00:23:52.383  [2024-11-20 14:34:31.211718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.224359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.224400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:23:52.383  [2024-11-20 14:34:31.224430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.581 ms
00:23:52.383  [2024-11-20 14:34:31.224444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.236736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.236779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:23:52.383  [2024-11-20 14:34:31.236806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.233 ms
00:23:52.383  [2024-11-20 14:34:31.236820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.249017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.383  [2024-11-20 14:34:31.249060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:23:52.383  [2024-11-20 14:34:31.249085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 12.101 ms
00:23:52.383  [2024-11-20 14:34:31.249098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.383  [2024-11-20 14:34:31.249155] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:23:52.383  [2024-11-20 14:34:31.249181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.383  [2024-11-20 14:34:31.249505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.249994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.384  [2024-11-20 14:34:31.250854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.385  [2024-11-20 14:34:31.250875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.385  [2024-11-20 14:34:31.250890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.385  [2024-11-20 14:34:31.250909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:23:52.385  [2024-11-20 14:34:31.250933] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:23:52.385  [2024-11-20 14:34:31.250956] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         4beb1ddc-3b05-4a56-8819-65e82b329bd5
00:23:52.385  [2024-11-20 14:34:31.250990] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:23:52.385  [2024-11-20 14:34:31.251010] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:23:52.385  [2024-11-20 14:34:31.251023] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:23:52.385  [2024-11-20 14:34:31.251041] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:23:52.385  [2024-11-20 14:34:31.251054] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:52.385  [2024-11-20 14:34:31.251072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:23:52.385  [2024-11-20 14:34:31.251085] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:23:52.385  [2024-11-20 14:34:31.251101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:23:52.385  [2024-11-20 14:34:31.251113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:23:52.385  [2024-11-20 14:34:31.251131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.385  [2024-11-20 14:34:31.251146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:23:52.385  [2024-11-20 14:34:31.251162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.981 ms
00:23:52.385  [2024-11-20 14:34:31.251182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
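In the statistics dump above, "WAF: inf" is the write amplification factor. The surrounding fields suggest it is computed as total writes divided by user writes, and with no user writes yet the division degenerates:

    WAF = total writes / user writes = 960 / 0 -> inf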
00:23:52.385  [2024-11-20 14:34:31.267856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.385  [2024-11-20 14:34:31.267901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:23:52.385  [2024-11-20 14:34:31.267931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.598 ms
00:23:52.385  [2024-11-20 14:34:31.267946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.385  [2024-11-20 14:34:31.268476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:52.385  [2024-11-20 14:34:31.268510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:23:52.385  [2024-11-20 14:34:31.268534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.406 ms
00:23:52.385  [2024-11-20 14:34:31.268548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.385  [2024-11-20 14:34:31.327388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.385  [2024-11-20 14:34:31.327468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:52.385  [2024-11-20 14:34:31.327497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.385  [2024-11-20 14:34:31.327512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.385  [2024-11-20 14:34:31.327680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.385  [2024-11-20 14:34:31.327701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:52.385  [2024-11-20 14:34:31.327741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.385  [2024-11-20 14:34:31.327755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.385  [2024-11-20 14:34:31.327834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.385  [2024-11-20 14:34:31.327854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:52.385  [2024-11-20 14:34:31.327879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.385  [2024-11-20 14:34:31.327893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.385  [2024-11-20 14:34:31.327928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.385  [2024-11-20 14:34:31.327944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:52.385  [2024-11-20 14:34:31.327964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.385  [2024-11-20 14:34:31.327983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.644  [2024-11-20 14:34:31.436608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.644  [2024-11-20 14:34:31.436695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:52.644  [2024-11-20 14:34:31.436726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.644  [2024-11-20 14:34:31.436742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.644  [2024-11-20 14:34:31.521453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.644  [2024-11-20 14:34:31.521525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:52.644  [2024-11-20 14:34:31.521553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.644  [2024-11-20 14:34:31.521567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.644  [2024-11-20 14:34:31.521705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.644  [2024-11-20 14:34:31.521725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:52.644  [2024-11-20 14:34:31.521743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.644  [2024-11-20 14:34:31.521755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.644  [2024-11-20 14:34:31.521796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.644  [2024-11-20 14:34:31.521810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:52.644  [2024-11-20 14:34:31.521826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.644  [2024-11-20 14:34:31.521838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.644  [2024-11-20 14:34:31.521972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.644  [2024-11-20 14:34:31.522015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:52.644  [2024-11-20 14:34:31.522034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.644  [2024-11-20 14:34:31.522047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.644  [2024-11-20 14:34:31.522113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.644  [2024-11-20 14:34:31.522133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:23:52.644  [2024-11-20 14:34:31.522149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.644  [2024-11-20 14:34:31.522162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.644  [2024-11-20 14:34:31.522216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.644  [2024-11-20 14:34:31.522240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:52.644  [2024-11-20 14:34:31.522259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.644  [2024-11-20 14:34:31.522271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.644  [2024-11-20 14:34:31.522330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:52.644  [2024-11-20 14:34:31.522348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:52.644  [2024-11-20 14:34:31.522363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:23:52.644  [2024-11-20 14:34:31.522376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:52.644  [2024-11-20 14:34:31.522541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 361.407 ms, result 0
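Note that the Rollback records in this 'FTL shutdown' undo the startup Actions in reverse order (reloc, bands metadata, trim map, valid map, NV cache, metadata, core IO channel, bands, memory pools, superblock, cache bdev, base bdev), the mirror image of the initialization sequence logged earlier. A sketch of the unwind pattern the trace suggests, in shell form (an assumption, not SPDK source):

    # assumption: completed steps are recorded, then unwound last-in-first-out
    for ((i=${#completed[@]}-1; i>=0; i--)); do
        rollback_step "${completed[i]}"   # each logged as "Rollback" with the step name
    done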
00:23:53.580   14:34:32 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
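This spdk_dd invocation reads 65536 blocks from the ftl0 bdev (--ib, input bdev) into the test data file (--of), bringing up the bdev stack from ftl.json; the fresh FTL startup that follows belongs to this new spdk_dd process (note file-prefix spdk_pid79140 in the EAL parameters). In spirit it is the bdev-layer analogue of:

    # rough analogue only -- regular dd cannot open an SPDK bdev:
    dd if=<ftl0> of=/home/vagrant/spdk_repo/spdk/test/ftl/data count=65536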
00:23:53.580  [2024-11-20 14:34:32.552762] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:23:53.580  [2024-11-20 14:34:32.552926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79140 ]
00:23:53.838  [2024-11-20 14:34:32.726655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:54.096  [2024-11-20 14:34:32.834354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:54.355  [2024-11-20 14:34:33.160008] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:54.355  [2024-11-20 14:34:33.160095] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:54.355  [2024-11-20 14:34:33.322474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.355  [2024-11-20 14:34:33.322539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:23:54.355  [2024-11-20 14:34:33.322559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:23:54.355  [2024-11-20 14:34:33.322586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.355  [2024-11-20 14:34:33.325880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.355  [2024-11-20 14:34:33.325926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:23:54.355  [2024-11-20 14:34:33.325942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.260 ms
00:23:54.355  [2024-11-20 14:34:33.325954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.355  [2024-11-20 14:34:33.326105] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:54.355  [2024-11-20 14:34:33.327062] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:54.355  [2024-11-20 14:34:33.327105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.355  [2024-11-20 14:34:33.327119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:23:54.355  [2024-11-20 14:34:33.327132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.011 ms
00:23:54.355  [2024-11-20 14:34:33.327143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.355  [2024-11-20 14:34:33.328369] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:54.615  [2024-11-20 14:34:33.344661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.615  [2024-11-20 14:34:33.344724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:23:54.615  [2024-11-20 14:34:33.344744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.291 ms
00:23:54.615  [2024-11-20 14:34:33.344756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.615  [2024-11-20 14:34:33.344920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.615  [2024-11-20 14:34:33.344943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:23:54.615  [2024-11-20 14:34:33.344956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.034 ms
00:23:54.615  [2024-11-20 14:34:33.344967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.615  [2024-11-20 14:34:33.349639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.615  [2024-11-20 14:34:33.349694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:23:54.615  [2024-11-20 14:34:33.349711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.607 ms
00:23:54.615  [2024-11-20 14:34:33.349723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.615  [2024-11-20 14:34:33.349879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.615  [2024-11-20 14:34:33.349902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:23:54.615  [2024-11-20 14:34:33.349915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.073 ms
00:23:54.615  [2024-11-20 14:34:33.349927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.615  [2024-11-20 14:34:33.349967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.615  [2024-11-20 14:34:33.349988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:23:54.615  [2024-11-20 14:34:33.350001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:23:54.615  [2024-11-20 14:34:33.350012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.615  [2024-11-20 14:34:33.350043] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:23:54.615  [2024-11-20 14:34:33.354340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.615  [2024-11-20 14:34:33.354379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:23:54.615  [2024-11-20 14:34:33.354396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.306 ms
00:23:54.615  [2024-11-20 14:34:33.354407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.615  [2024-11-20 14:34:33.354480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.615  [2024-11-20 14:34:33.354498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:23:54.615  [2024-11-20 14:34:33.354511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:23:54.615  [2024-11-20 14:34:33.354522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.615  [2024-11-20 14:34:33.354553] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:23:54.615  [2024-11-20 14:34:33.354608] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:23:54.615  [2024-11-20 14:34:33.354653] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:23:54.615  [2024-11-20 14:34:33.354673] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:23:54.615  [2024-11-20 14:34:33.354786] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:23:54.615  [2024-11-20 14:34:33.354802] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:23:54.615  [2024-11-20 14:34:33.354816] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:23:54.615  [2024-11-20 14:34:33.354845] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:23:54.615  [2024-11-20 14:34:33.354863] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:23:54.615  [2024-11-20 14:34:33.354876] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    23592960
00:23:54.615  [2024-11-20 14:34:33.354887] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:23:54.615  [2024-11-20 14:34:33.354897] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:23:54.615  [2024-11-20 14:34:33.354907] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:23:54.615  [2024-11-20 14:34:33.354919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.615  [2024-11-20 14:34:33.354930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:23:54.615  [2024-11-20 14:34:33.354942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.370 ms
00:23:54.615  [2024-11-20 14:34:33.354953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.615  [2024-11-20 14:34:33.355085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.615  [2024-11-20 14:34:33.355108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:23:54.615  [2024-11-20 14:34:33.355120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.071 ms
00:23:54.615  [2024-11-20 14:34:33.355131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.615  [2024-11-20 14:34:33.355252] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:23:54.615  [2024-11-20 14:34:33.355272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:23:54.615  [2024-11-20 14:34:33.355284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:54.615  [2024-11-20 14:34:33.355296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:54.615  [2024-11-20 14:34:33.355308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:23:54.615  [2024-11-20 14:34:33.355318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:23:54.615  [2024-11-20 14:34:33.355328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      90.00 MiB
00:23:54.615  [2024-11-20 14:34:33.355339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:23:54.615  [2024-11-20 14:34:33.355349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.12 MiB
00:23:54.615  [2024-11-20 14:34:33.355359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:54.615  [2024-11-20 14:34:33.355370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:23:54.615  [2024-11-20 14:34:33.355380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      90.62 MiB
00:23:54.615  [2024-11-20 14:34:33.355389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:23:54.615  [2024-11-20 14:34:33.355424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:23:54.615  [2024-11-20 14:34:33.355437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.88 MiB
00:23:54.615  [2024-11-20 14:34:33.355447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:54.615  [2024-11-20 14:34:33.355458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:23:54.615  [2024-11-20 14:34:33.355469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      124.00 MiB
00:23:54.615  [2024-11-20 14:34:33.355480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:54.615  [2024-11-20 14:34:33.355490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:23:54.615  [2024-11-20 14:34:33.355500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      91.12 MiB
00:23:54.616  [2024-11-20 14:34:33.355510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:54.616  [2024-11-20 14:34:33.355520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:23:54.616  [2024-11-20 14:34:33.355531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      99.12 MiB
00:23:54.616  [2024-11-20 14:34:33.355541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:54.616  [2024-11-20 14:34:33.355551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:23:54.616  [2024-11-20 14:34:33.355561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      107.12 MiB
00:23:54.616  [2024-11-20 14:34:33.355588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:54.616  [2024-11-20 14:34:33.355601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:23:54.616  [2024-11-20 14:34:33.355611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      115.12 MiB
00:23:54.616  [2024-11-20 14:34:33.355621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:23:54.616  [2024-11-20 14:34:33.355632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:23:54.616  [2024-11-20 14:34:33.355642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.12 MiB
00:23:54.616  [2024-11-20 14:34:33.355652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:54.616  [2024-11-20 14:34:33.355662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:23:54.616  [2024-11-20 14:34:33.355672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.38 MiB
00:23:54.616  [2024-11-20 14:34:33.355682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:23:54.616  [2024-11-20 14:34:33.355691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:23:54.616  [2024-11-20 14:34:33.355701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.62 MiB
00:23:54.616  [2024-11-20 14:34:33.355711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:54.616  [2024-11-20 14:34:33.355721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:23:54.616  [2024-11-20 14:34:33.355732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      123.75 MiB
00:23:54.616  [2024-11-20 14:34:33.355742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:54.616  [2024-11-20 14:34:33.355762] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:23:54.616  [2024-11-20 14:34:33.355773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:23:54.616  [2024-11-20 14:34:33.355797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:23:54.616  [2024-11-20 14:34:33.355812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:23:54.616  [2024-11-20 14:34:33.355823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:23:54.616  [2024-11-20 14:34:33.355834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:23:54.616  [2024-11-20 14:34:33.355844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:23:54.616  [2024-11-20 14:34:33.355855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:23:54.616  [2024-11-20 14:34:33.355865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:23:54.616  [2024-11-20 14:34:33.355875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
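The dump above reports each metadata region as a byte offset and size in MiB, derived from block counts; the hex block numbers in the superblock dump that follows line up with these figures at 4 KiB per block (e.g. trim_md: 0x40 = 64 blocks = 0.25 MiB). A minimal sketch of such a printer, using illustrative structures rather than SPDK's actual ftl_layout types:

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal sketch of a dump_region-style printer, assuming 4 KiB FTL
 * blocks (consistent with the hex sizes in the superblock dump below);
 * names and fields here are illustrative, not SPDK's real structures. */
#define FTL_BLOCK_SIZE 4096ULL

struct region {
	const char *name;
	uint64_t blk_offs;	/* offset, in blocks */
	uint64_t blk_sz;	/* size, in blocks */
};

static void dump_region(const struct region *r)
{
	printf("Region %s\n", r->name);
	printf("\toffset: %10.2f MiB\n",
	       (double)(r->blk_offs * FTL_BLOCK_SIZE) / (1024 * 1024));
	printf("\tblocks: %10.2f MiB\n",
	       (double)(r->blk_sz * FTL_BLOCK_SIZE) / (1024 * 1024));
}

int main(void)
{
	/* trim_md from the log: 123.12 MiB offset, 0.25 MiB (64 blocks) */
	struct region trim_md = { "trim_md", 31520, 64 };
	dump_region(&trim_md);
	return 0;
}
```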
00:23:54.616  [2024-11-20 14:34:33.355886] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:23:54.616  [2024-11-20 14:34:33.355900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:54.616  [2024-11-20 14:34:33.355912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:23:54.616  [2024-11-20 14:34:33.355923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:23:54.616  [2024-11-20 14:34:33.355934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:23:54.616  [2024-11-20 14:34:33.355945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:23:54.616  [2024-11-20 14:34:33.355955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:23:54.616  [2024-11-20 14:34:33.355966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:23:54.616  [2024-11-20 14:34:33.355977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:23:54.616  [2024-11-20 14:34:33.355987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:23:54.616  [2024-11-20 14:34:33.355998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:23:54.616  [2024-11-20 14:34:33.356008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:23:54.616  [2024-11-20 14:34:33.356019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:23:54.616  [2024-11-20 14:34:33.356030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:23:54.616  [2024-11-20 14:34:33.356041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:23:54.616  [2024-11-20 14:34:33.356052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:23:54.616  [2024-11-20 14:34:33.356062] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:23:54.616  [2024-11-20 14:34:33.356074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:23:54.616  [2024-11-20 14:34:33.356086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:23:54.616  [2024-11-20 14:34:33.356098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:23:54.616  [2024-11-20 14:34:33.356108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:23:54.616  [2024-11-20 14:34:33.356119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
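The superblock v5 dump shows the same layout as raw region descriptors: type, version, block offset, and block size in hex. Type 0x1 at blk_offs:0x0 blk_sz:0x20 (32 blocks = 0.12 MiB) corresponds to the sb_mirror region at offset 0.00 MiB above, and type 0xfffffffe appears to mark unallocated gaps. A hedged sketch of such a descriptor (field names are hypothetical; the real ftl_superblock_v5 structures may differ):

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative layout of an on-disk region descriptor as dumped above;
 * not SPDK's actual superblock structures. */
struct md_region_desc {
	uint32_t type;		/* 0xfffffffe appears to mark a free gap */
	uint32_t ver;
	uint64_t blk_offs;
	uint64_t blk_sz;
};

int main(void)
{
	/* Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 from the log */
	struct md_region_desc r = { 0xe, 0, 0x7b20, 0x40 };

	printf("Region type:0x%x ver:%u blk_offs:0x%jx blk_sz:0x%jx\n",
	       r.type, r.ver, (uintmax_t)r.blk_offs, (uintmax_t)r.blk_sz);
	/* 0x40 blocks * 4 KiB = 0.25 MiB -- matches the trim_md line above */
	printf("size = %.2f MiB\n", (double)(r.blk_sz * 4096) / (1024 * 1024));
	return 0;
}
```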
00:23:54.616  [2024-11-20 14:34:33.356131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.616  [2024-11-20 14:34:33.356142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:23:54.616  [2024-11-20 14:34:33.356158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.950 ms
00:23:54.616  [2024-11-20 14:34:33.356169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.616  [2024-11-20 14:34:33.389083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.616  [2024-11-20 14:34:33.389144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:23:54.616  [2024-11-20 14:34:33.389165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.837 ms
00:23:54.616  [2024-11-20 14:34:33.389178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.616  [2024-11-20 14:34:33.389370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.616  [2024-11-20 14:34:33.389398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:23:54.616  [2024-11-20 14:34:33.389411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.067 ms
00:23:54.616  [2024-11-20 14:34:33.389422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.616  [2024-11-20 14:34:33.447160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.616  [2024-11-20 14:34:33.447224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:23:54.616  [2024-11-20 14:34:33.447244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 57.703 ms
00:23:54.616  [2024-11-20 14:34:33.447261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.616  [2024-11-20 14:34:33.447438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.616  [2024-11-20 14:34:33.447465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:23:54.616  [2024-11-20 14:34:33.447478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:23:54.616  [2024-11-20 14:34:33.447489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.616  [2024-11-20 14:34:33.447837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.616  [2024-11-20 14:34:33.447868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:23:54.616  [2024-11-20 14:34:33.447882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.315 ms
00:23:54.616  [2024-11-20 14:34:33.447902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.616  [2024-11-20 14:34:33.448062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.616  [2024-11-20 14:34:33.448091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:23:54.617  [2024-11-20 14:34:33.448105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.126 ms
00:23:54.617  [2024-11-20 14:34:33.448116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.617  [2024-11-20 14:34:33.465677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.617  [2024-11-20 14:34:33.465741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:23:54.617  [2024-11-20 14:34:33.465762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.528 ms
00:23:54.617  [2024-11-20 14:34:33.465774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.617  [2024-11-20 14:34:33.482675] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:23:54.617  [2024-11-20 14:34:33.482734] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:23:54.617  [2024-11-20 14:34:33.482756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.617  [2024-11-20 14:34:33.482769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:23:54.617  [2024-11-20 14:34:33.482784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.791 ms
00:23:54.617  [2024-11-20 14:34:33.482796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.617  [2024-11-20 14:34:33.518558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.617  [2024-11-20 14:34:33.518672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:23:54.617  [2024-11-20 14:34:33.518694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 35.471 ms
00:23:54.617  [2024-11-20 14:34:33.518706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.617  [2024-11-20 14:34:33.534862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.617  [2024-11-20 14:34:33.534918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:23:54.617  [2024-11-20 14:34:33.534937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.002 ms
00:23:54.617  [2024-11-20 14:34:33.534949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.617  [2024-11-20 14:34:33.550584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.617  [2024-11-20 14:34:33.550629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:23:54.617  [2024-11-20 14:34:33.550648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.476 ms
00:23:54.617  [2024-11-20 14:34:33.550659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.617  [2024-11-20 14:34:33.551888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.617  [2024-11-20 14:34:33.551938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:23:54.617  [2024-11-20 14:34:33.551957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.880 ms
00:23:54.617  [2024-11-20 14:34:33.551969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.876  [2024-11-20 14:34:33.626789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.876  [2024-11-20 14:34:33.626868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:23:54.876  [2024-11-20 14:34:33.626889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 74.779 ms
00:23:54.876  [2024-11-20 14:34:33.626902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.876  [2024-11-20 14:34:33.639813] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:23:54.876  [2024-11-20 14:34:33.654010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.876  [2024-11-20 14:34:33.654083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:23:54.876  [2024-11-20 14:34:33.654104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 26.931 ms
00:23:54.876  [2024-11-20 14:34:33.654126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.876  [2024-11-20 14:34:33.654302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.876  [2024-11-20 14:34:33.654324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:23:54.876  [2024-11-20 14:34:33.654338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:23:54.876  [2024-11-20 14:34:33.654349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.876  [2024-11-20 14:34:33.654420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.876  [2024-11-20 14:34:33.654438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:23:54.876  [2024-11-20 14:34:33.654450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.040 ms
00:23:54.876  [2024-11-20 14:34:33.654462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.876  [2024-11-20 14:34:33.654515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.876  [2024-11-20 14:34:33.654534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:23:54.876  [2024-11-20 14:34:33.654546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:23:54.876  [2024-11-20 14:34:33.654557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.876  [2024-11-20 14:34:33.654628] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:23:54.876  [2024-11-20 14:34:33.654649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.876  [2024-11-20 14:34:33.654660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:23:54.876  [2024-11-20 14:34:33.654673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.023 ms
00:23:54.876  [2024-11-20 14:34:33.654684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.876  [2024-11-20 14:34:33.685985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.876  [2024-11-20 14:34:33.686047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:23:54.876  [2024-11-20 14:34:33.686067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.271 ms
00:23:54.876  [2024-11-20 14:34:33.686079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.876  [2024-11-20 14:34:33.686239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:54.876  [2024-11-20 14:34:33.686261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:23:54.876  [2024-11-20 14:34:33.686274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.050 ms
00:23:54.876  [2024-11-20 14:34:33.686285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:23:54.876  [2024-11-20 14:34:33.687262] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:23:54.876  [2024-11-20 14:34:33.691486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 364.469 ms, result 0
00:23:54.876  [2024-11-20 14:34:33.692280] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:54.876  [2024-11-20 14:34:33.708741] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
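Every management step above is bracketed by the same trace: the step name, its wall-clock duration, and a status code, with the final finish_msg summing the whole 'FTL startup' pipeline to 364.469 ms. A minimal sketch of a step timer in that spirit (hypothetical helper, not SPDK's actual trace_step implementation):

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical step timer matching the shape of the trace_step lines. */
struct step_trace {
	const char *name;
	struct timespec start;
};

static void step_begin(struct step_trace *t, const char *name)
{
	t->name = name;
	clock_gettime(CLOCK_MONOTONIC, &t->start);
}

static void step_end(struct step_trace *t, int status)
{
	struct timespec now;
	clock_gettime(CLOCK_MONOTONIC, &now);
	double ms = (now.tv_sec - t->start.tv_sec) * 1e3 +
		    (now.tv_nsec - t->start.tv_nsec) / 1e6;
	printf("Action\n\t name:     %s\n\t duration: %.3f ms\n"
	       "\t status:   %d\n", t->name, ms, status);
}

int main(void)
{
	struct step_trace t;
	step_begin(&t, "Initialize metadata");
	/* ... the step's actual work would run here ... */
	step_end(&t, 0);
	return 0;
}
```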
00:23:55.809  
[2024-11-20T14:34:36.163Z] Copying: 28/256 [MB] (28 MBps)
[2024-11-20T14:34:37.106Z] Copying: 52/256 [MB] (23 MBps)
[2024-11-20T14:34:38.042Z] Copying: 77/256 [MB] (25 MBps)
[2024-11-20T14:34:38.977Z] Copying: 103/256 [MB] (25 MBps)
[2024-11-20T14:34:39.911Z] Copying: 126/256 [MB] (22 MBps)
[2024-11-20T14:34:40.844Z] Copying: 150/256 [MB] (24 MBps)
[2024-11-20T14:34:41.778Z] Copying: 177/256 [MB] (26 MBps)
[2024-11-20T14:34:42.799Z] Copying: 203/256 [MB] (26 MBps)
[2024-11-20T14:34:44.173Z] Copying: 228/256 [MB] (24 MBps)
[2024-11-20T14:34:44.173Z] Copying: 252/256 [MB] (24 MBps)
[2024-11-20T14:34:44.431Z] Copying: 256/256 [MB] (average 25 MBps)
[2024-11-20 14:34:44.249947] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
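The progress lines track a 256 MB copy through the FTL bdev at roughly 22-28 MBps per interval; the reported 25 MBps average implies a transfer of about 10 s, which is consistent with the spread of the timestamps above. A quick arithmetic check:

```c
#include <stdio.h>

int main(void)
{
	/* Sanity-check the "average 25 MBps" line: 256 MB at 25 MBps
	 * implies roughly a 10 s transfer, matching the progress
	 * timestamps above. */
	double mb = 256.0, mbps = 25.0;
	printf("expected duration: %.1f s\n", mb / mbps);	/* ~10.2 s */
	return 0;
}
```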
00:24:05.449  [2024-11-20 14:34:44.267024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.449  [2024-11-20 14:34:44.267102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:24:05.449  [2024-11-20 14:34:44.267136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:24:05.449  [2024-11-20 14:34:44.267163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.449  [2024-11-20 14:34:44.267201] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:24:05.449  [2024-11-20 14:34:44.271819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.449  [2024-11-20 14:34:44.271898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:24:05.449  [2024-11-20 14:34:44.271934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.587 ms
00:24:05.449  [2024-11-20 14:34:44.271959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.449  [2024-11-20 14:34:44.272405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.450  [2024-11-20 14:34:44.272467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:24:05.450  [2024-11-20 14:34:44.272498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.374 ms
00:24:05.450  [2024-11-20 14:34:44.272522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.450  [2024-11-20 14:34:44.277008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.450  [2024-11-20 14:34:44.277105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:24:05.450  [2024-11-20 14:34:44.277139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.439 ms
00:24:05.450  [2024-11-20 14:34:44.277163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.450  [2024-11-20 14:34:44.286829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.450  [2024-11-20 14:34:44.286929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:24:05.450  [2024-11-20 14:34:44.286950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.556 ms
00:24:05.450  [2024-11-20 14:34:44.286964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.450  [2024-11-20 14:34:44.332133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.450  [2024-11-20 14:34:44.332250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:24:05.450  [2024-11-20 14:34:44.332272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 45.055 ms
00:24:05.450  [2024-11-20 14:34:44.332284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.450  [2024-11-20 14:34:44.351451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.450  [2024-11-20 14:34:44.351583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:24:05.450  [2024-11-20 14:34:44.351614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 19.050 ms
00:24:05.450  [2024-11-20 14:34:44.351627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.450  [2024-11-20 14:34:44.351964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.450  [2024-11-20 14:34:44.352025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:24:05.450  [2024-11-20 14:34:44.352056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.138 ms
00:24:05.450  [2024-11-20 14:34:44.352077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.450  [2024-11-20 14:34:44.385391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.450  [2024-11-20 14:34:44.385482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:24:05.450  [2024-11-20 14:34:44.385504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.245 ms
00:24:05.450  [2024-11-20 14:34:44.385516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.450  [2024-11-20 14:34:44.418314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.450  [2024-11-20 14:34:44.418406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:24:05.450  [2024-11-20 14:34:44.418427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.684 ms
00:24:05.450  [2024-11-20 14:34:44.418438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.709  [2024-11-20 14:34:44.451098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.709  [2024-11-20 14:34:44.451193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:24:05.709  [2024-11-20 14:34:44.451216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.558 ms
00:24:05.709  [2024-11-20 14:34:44.451228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.709  [2024-11-20 14:34:44.484379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.709  [2024-11-20 14:34:44.484483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:24:05.709  [2024-11-20 14:34:44.484504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.000 ms
00:24:05.709  [2024-11-20 14:34:44.484516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.709  [2024-11-20 14:34:44.484633] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:05.709  [2024-11-20 14:34:44.484660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.484995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.485010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.485029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.709  [2024-11-20 14:34:44.485048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.485987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.710  [2024-11-20 14:34:44.486534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.711  [2024-11-20 14:34:44.486557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.711  [2024-11-20 14:34:44.486596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.711  [2024-11-20 14:34:44.486612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.711  [2024-11-20 14:34:44.486634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.711  [2024-11-20 14:34:44.486686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.711  [2024-11-20 14:34:44.486711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.711  [2024-11-20 14:34:44.486733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.711  [2024-11-20 14:34:44.486755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:24:05.711  [2024-11-20 14:34:44.486777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
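The bands dump prints one line per band: valid blocks over band capacity (0 / 261120 here, since the trim test left no valid user data), the cumulative write count, and the band state. Assuming the same 4 KiB block size, 261120 blocks is about 1020 MiB per band, so 100 bands roughly account for the 102400 MiB base device less metadata regions. An illustrative version of that loop (not SPDK's actual band structures):

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative band bookkeeping behind the ftl_dev_dump_bands lines;
 * not SPDK's actual structures. */
struct band {
	uint64_t valid_blocks;
	uint64_t num_blocks;	/* 261120 in this run */
	uint64_t wr_cnt;
	const char *state;	/* "free", "open", "closed", ... */
};

static void dump_bands(const struct band *bands, int n)
{
	for (int i = 0; i < n; i++)
		printf(" Band %3d: %8ju / %ju \twr_cnt: %ju\tstate: %s\n",
		       i + 1, (uintmax_t)bands[i].valid_blocks,
		       (uintmax_t)bands[i].num_blocks,
		       (uintmax_t)bands[i].wr_cnt, bands[i].state);
}

int main(void)
{
	struct band b[2] = {
		{ 0, 261120, 0, "free" },
		{ 0, 261120, 0, "free" },
	};
	dump_bands(b, 2);
	return 0;
}
```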
00:24:05.711  [2024-11-20 14:34:44.486803] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:24:05.711  [2024-11-20 14:34:44.486818] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         4beb1ddc-3b05-4a56-8819-65e82b329bd5
00:24:05.711  [2024-11-20 14:34:44.486840] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:24:05.711  [2024-11-20 14:34:44.486861] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:24:05.711  [2024-11-20 14:34:44.486883] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:24:05.711  [2024-11-20 14:34:44.486905] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:24:05.711  [2024-11-20 14:34:44.486929] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:05.711  [2024-11-20 14:34:44.486953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:24:05.711  [2024-11-20 14:34:44.486971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:24:05.711  [2024-11-20 14:34:44.486981] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:24:05.711  [2024-11-20 14:34:44.486992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
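WAF (write amplification factor) appears here as the ratio of total device writes to user writes; with 960 internal writes (metadata and superblock persists) and zero user writes in this run, the division yields the "inf" shown above. A sketch of that calculation:

```c
#include <stdio.h>

int main(void)
{
	/* Values from the stats dump above. */
	double total_writes = 960, user_writes = 0;

	/* Dividing by zero in IEEE-754 double yields +inf, which printf
	 * renders as "inf" -- matching the "WAF: inf" line. */
	double waf = total_writes / user_writes;
	printf("WAF: %g\n", waf);	/* prints: WAF: inf */
	return 0;
}
```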
00:24:05.711  [2024-11-20 14:34:44.487013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.711  [2024-11-20 14:34:44.487047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:24:05.711  [2024-11-20 14:34:44.487081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.382 ms
00:24:05.711  [2024-11-20 14:34:44.487104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.711  [2024-11-20 14:34:44.505276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.711  [2024-11-20 14:34:44.505355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:24:05.711  [2024-11-20 14:34:44.505377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.110 ms
00:24:05.711  [2024-11-20 14:34:44.505396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.711  [2024-11-20 14:34:44.506055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.711  [2024-11-20 14:34:44.506098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:24:05.711  [2024-11-20 14:34:44.506114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.564 ms
00:24:05.711  [2024-11-20 14:34:44.506126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.711  [2024-11-20 14:34:44.555215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.711  [2024-11-20 14:34:44.555310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:05.711  [2024-11-20 14:34:44.555332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.711  [2024-11-20 14:34:44.555344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.711  [2024-11-20 14:34:44.555590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.711  [2024-11-20 14:34:44.555625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:05.711  [2024-11-20 14:34:44.555654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.711  [2024-11-20 14:34:44.555678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.711  [2024-11-20 14:34:44.555841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.711  [2024-11-20 14:34:44.555892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:05.711  [2024-11-20 14:34:44.555921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.711  [2024-11-20 14:34:44.555944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.711  [2024-11-20 14:34:44.555982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.711  [2024-11-20 14:34:44.556017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:05.711  [2024-11-20 14:34:44.556042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.711  [2024-11-20 14:34:44.556061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.711  [2024-11-20 14:34:44.663813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.711  [2024-11-20 14:34:44.663916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:05.711  [2024-11-20 14:34:44.663939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.711  [2024-11-20 14:34:44.663951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.970  [2024-11-20 14:34:44.751903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.970  [2024-11-20 14:34:44.751982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:05.970  [2024-11-20 14:34:44.752003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.970  [2024-11-20 14:34:44.752015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.970  [2024-11-20 14:34:44.752111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.970  [2024-11-20 14:34:44.752130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:05.970  [2024-11-20 14:34:44.752142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.970  [2024-11-20 14:34:44.752153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.970  [2024-11-20 14:34:44.752189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.970  [2024-11-20 14:34:44.752203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:05.970  [2024-11-20 14:34:44.752234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.970  [2024-11-20 14:34:44.752245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.970  [2024-11-20 14:34:44.752406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.970  [2024-11-20 14:34:44.752459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:05.970  [2024-11-20 14:34:44.752485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.970  [2024-11-20 14:34:44.752505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.970  [2024-11-20 14:34:44.752615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.970  [2024-11-20 14:34:44.752651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:24:05.970  [2024-11-20 14:34:44.752675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.970  [2024-11-20 14:34:44.752699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.970  [2024-11-20 14:34:44.752773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.970  [2024-11-20 14:34:44.752809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:05.970  [2024-11-20 14:34:44.752831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.970  [2024-11-20 14:34:44.752852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.970  [2024-11-20 14:34:44.752937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:05.970  [2024-11-20 14:34:44.752975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:05.970  [2024-11-20 14:34:44.753008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:05.970  [2024-11-20 14:34:44.753029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.970  [2024-11-20 14:34:44.753294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 486.285 ms, result 0
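On shutdown, after the persist steps, the manager replays each completed startup step's Rollback handler in reverse order of startup (reloc down to base bdev); the uniform 0.000 ms durations show these are cheap teardown callbacks, and the whole 'FTL shutdown' process totals 486.285 ms, dominated by the persist steps. A minimal sketch of that reverse-order rollback pattern (illustrative, not SPDK's ftl_mngt code):

```c
#include <stdio.h>

struct mngt_step {
	const char *name;
	void (*rollback)(void);
};

static void noop_rollback(void) { /* release the step's resources */ }

/* Steps listed in startup order; rollback runs newest-first, as in
 * the log above. */
static struct mngt_step steps[] = {
	{ "Open base bdev", noop_rollback },
	{ "Open cache bdev", noop_rollback },
	{ "Initialize superblock", noop_rollback },
	{ "Initialize memory pools", noop_rollback },
};

int main(void)
{
	int n = sizeof(steps) / sizeof(steps[0]);
	for (int i = n - 1; i >= 0; i--) {
		printf("Rollback\n\t name:     %s\n", steps[i].name);
		steps[i].rollback();
	}
	return 0;
}
```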
00:24:06.905  
00:24:06.905  
00:24:06.906   14:34:45 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:07.472  /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:24:07.472   14:34:46 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:24:07.472   14:34:46 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:24:07.472   14:34:46 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:07.472   14:34:46 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:07.472   14:34:46 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:24:07.472   14:34:46 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:24:07.731  Process with pid 79076 is not found
00:24:07.731   14:34:46 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79076
00:24:07.731   14:34:46 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79076 ']'
00:24:07.731   14:34:46 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79076
00:24:07.731  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79076) - No such process
00:24:07.731   14:34:46 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79076 is not found'
00:24:07.731  ************************************
00:24:07.731  END TEST ftl_trim
00:24:07.731  ************************************
00:24:07.731  
00:24:07.731  real	1m9.639s
00:24:07.731  user	1m37.178s
00:24:07.731  sys	0m7.084s
00:24:07.731   14:34:46 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:07.731   14:34:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
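In the cleanup above, killprocess probes the pid with `kill -0`, which sends no signal but reports whether the process still exists; pid 79076 had already exited earlier in the test, so the probe fails with "No such process" and the helper just logs that. The same existence check in C, via kill(2) with signal 0:

```c
#include <stdio.h>
#include <errno.h>
#include <signal.h>
#include <sys/types.h>

int main(void)
{
	pid_t pid = 79076;	/* pid from the log above */

	/* Signal 0 performs the existence/permission check only. */
	if (kill(pid, 0) == -1 && errno == ESRCH)
		printf("Process with pid %d is not found\n", (int)pid);
	else
		printf("pid %d exists (or we lack permission)\n", (int)pid);
	return 0;
}
```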
00:24:07.731   14:34:46 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:24:07.731   14:34:46 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:24:07.731   14:34:46 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:07.731   14:34:46 ftl -- common/autotest_common.sh@10 -- # set +x
00:24:07.731  ************************************
00:24:07.731  START TEST ftl_restore
00:24:07.731  ************************************
00:24:07.731   14:34:46 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:24:07.731  * Looking for test storage...
00:24:07.731  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:24:07.731    14:34:46 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:24:07.731     14:34:46 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version
00:24:07.731     14:34:46 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:24:07.731    14:34:46 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:07.731     14:34:46 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:24:07.731     14:34:46 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:24:07.731     14:34:46 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:07.731     14:34:46 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:24:07.731     14:34:46 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:24:07.731     14:34:46 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:24:07.731     14:34:46 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:07.731     14:34:46 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:07.731    14:34:46 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
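The nested trace above walks cmp_versions from scripts/common.sh: the detected lcov version (1.15) and the threshold (2) are split on `.`, `-`, and `:` into arrays and compared component by component; 1 < 2 at the first component, so `lt 1.15 2` succeeds and the pre-2.0 `--rc lcov_*` option spelling is exported below. An equivalent component-wise comparison in C (a hedged sketch, not the shell helper itself):

```c
#include <stdio.h>
#include <stdlib.h>

/* Compare dotted version strings component by component, as the
 * cmp_versions trace above does; returns <0, 0, or >0. */
static int cmp_versions(char *a, char *b)
{
	while (*a || *b) {
		long x = strtol(a, &a, 10);	/* missing parts read as 0 */
		long y = strtol(b, &b, 10);
		if (x != y)
			return x < y ? -1 : 1;
		if (*a) a++;	/* skip the separator */
		if (*b) b++;
	}
	return 0;
}

int main(void)
{
	/* lt 1.15 2  ->  true, as in the log */
	printf("1.15 < 2: %s\n", cmp_versions("1.15", "2") < 0 ? "yes" : "no");
	return 0;
}
```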
00:24:07.731    14:34:46 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:07.731    14:34:46 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:24:07.731  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:07.731  		--rc genhtml_branch_coverage=1
00:24:07.731  		--rc genhtml_function_coverage=1
00:24:07.731  		--rc genhtml_legend=1
00:24:07.731  		--rc geninfo_all_blocks=1
00:24:07.731  		--rc geninfo_unexecuted_blocks=1
00:24:07.731  		
00:24:07.731  		'
00:24:07.731    14:34:46 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:24:07.731  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:07.731  		--rc genhtml_branch_coverage=1
00:24:07.731  		--rc genhtml_function_coverage=1
00:24:07.731  		--rc genhtml_legend=1
00:24:07.731  		--rc geninfo_all_blocks=1
00:24:07.731  		--rc geninfo_unexecuted_blocks=1
00:24:07.731  		
00:24:07.731  		'
00:24:07.731    14:34:46 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:24:07.731  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:07.731  		--rc genhtml_branch_coverage=1
00:24:07.731  		--rc genhtml_function_coverage=1
00:24:07.731  		--rc genhtml_legend=1
00:24:07.731  		--rc geninfo_all_blocks=1
00:24:07.731  		--rc geninfo_unexecuted_blocks=1
00:24:07.731  		
00:24:07.731  		'
00:24:07.731    14:34:46 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:24:07.731  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:07.731  		--rc genhtml_branch_coverage=1
00:24:07.731  		--rc genhtml_function_coverage=1
00:24:07.731  		--rc genhtml_legend=1
00:24:07.731  		--rc geninfo_all_blocks=1
00:24:07.731  		--rc geninfo_unexecuted_blocks=1
00:24:07.731  		
00:24:07.731  		'
00:24:07.731   14:34:46 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:24:07.731      14:34:46 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:24:07.731     14:34:46 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:24:07.731     14:34:46 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid=
00:24:07.731    14:34:46 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:24:07.732    14:34:46 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:07.732    14:34:46 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.sjZcpi8SK3
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240
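The getopts trace above reduces to the standard bash pattern sketched below; -c clearly carries the NV-cache PCI address, while the meanings of -u and -f are assumptions inferred from the spec string (restore.sh may name them differently). The literal 'shift 2' in the trace is just the expanded form of shifting past the parsed options.

  while getopts ':u:c:f' opt; do
      case $opt in
          c) nv_cache=$OPTARG ;;   # 0000:00:10.0 in this run
          u) uuid=$OPTARG ;;       # assumed: UUID of an existing FTL instance
          f) fast=1 ;;             # assumed: fast-shutdown toggle
      esac
  done
  shift $((OPTIND - 1))            # expands to 'shift 2' here
  device=$1                        # 0000:00:11.0
  timeout=240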
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79346
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79346
00:24:07.732   14:34:46 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:24:07.732   14:34:46 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79346 ']'
00:24:07.732   14:34:46 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:07.732   14:34:46 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:07.732  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:07.732   14:34:46 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:07.732   14:34:46 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:07.732   14:34:46 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:24:08.009  [2024-11-20 14:34:46.794960] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:24:08.009  [2024-11-20 14:34:46.795665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79346 ]
00:24:08.009  [2024-11-20 14:34:46.966653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:08.268  [2024-11-20 14:34:47.092157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:09.205   14:34:47 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:09.205   14:34:47 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0
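The bring-up above follows the usual SPDK test pattern: arm the cleanup trap, start spdk_tgt in the background, record its PID, and block in waitforlisten until the RPC socket answers. A minimal sketch using only names visible in the trace (restore_kill is the script's own cleanup helper, waitforlisten comes from autotest_common.sh):

  trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
  "$spdk_tgt_bin" &           # build/bin/spdk_tgt
  svcpid=$!
  waitforlisten "$svcpid"     # polls /var/tmp/spdk.sock until the target is up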
00:24:09.205    14:34:47 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:24:09.205    14:34:47 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0
00:24:09.205    14:34:47 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:24:09.205    14:34:47 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424
00:24:09.205    14:34:47 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev
00:24:09.205     14:34:47 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:24:09.464    14:34:48 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:24:09.464    14:34:48 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size
00:24:09.464     14:34:48 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:24:09.464     14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:24:09.464     14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:09.464     14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:09.464     14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:09.464      14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:24:09.724     14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:24:09.724    {
00:24:09.724      "name": "nvme0n1",
00:24:09.724      "aliases": [
00:24:09.724        "74af9069-e39f-4e8a-8b06-5c651ce85f94"
00:24:09.724      ],
00:24:09.724      "product_name": "NVMe disk",
00:24:09.724      "block_size": 4096,
00:24:09.724      "num_blocks": 1310720,
00:24:09.724      "uuid": "74af9069-e39f-4e8a-8b06-5c651ce85f94",
00:24:09.724      "numa_id": -1,
00:24:09.724      "assigned_rate_limits": {
00:24:09.724        "rw_ios_per_sec": 0,
00:24:09.724        "rw_mbytes_per_sec": 0,
00:24:09.724        "r_mbytes_per_sec": 0,
00:24:09.724        "w_mbytes_per_sec": 0
00:24:09.724      },
00:24:09.724      "claimed": true,
00:24:09.724      "claim_type": "read_many_write_one",
00:24:09.724      "zoned": false,
00:24:09.724      "supported_io_types": {
00:24:09.724        "read": true,
00:24:09.724        "write": true,
00:24:09.724        "unmap": true,
00:24:09.724        "flush": true,
00:24:09.724        "reset": true,
00:24:09.724        "nvme_admin": true,
00:24:09.724        "nvme_io": true,
00:24:09.724        "nvme_io_md": false,
00:24:09.724        "write_zeroes": true,
00:24:09.724        "zcopy": false,
00:24:09.724        "get_zone_info": false,
00:24:09.724        "zone_management": false,
00:24:09.724        "zone_append": false,
00:24:09.724        "compare": true,
00:24:09.724        "compare_and_write": false,
00:24:09.724        "abort": true,
00:24:09.724        "seek_hole": false,
00:24:09.724        "seek_data": false,
00:24:09.724        "copy": true,
00:24:09.724        "nvme_iov_md": false
00:24:09.724      },
00:24:09.724      "driver_specific": {
00:24:09.724        "nvme": [
00:24:09.724          {
00:24:09.724            "pci_address": "0000:00:11.0",
00:24:09.724            "trid": {
00:24:09.724              "trtype": "PCIe",
00:24:09.724              "traddr": "0000:00:11.0"
00:24:09.724            },
00:24:09.724            "ctrlr_data": {
00:24:09.724              "cntlid": 0,
00:24:09.724              "vendor_id": "0x1b36",
00:24:09.724              "model_number": "QEMU NVMe Ctrl",
00:24:09.724              "serial_number": "12341",
00:24:09.724              "firmware_revision": "8.0.0",
00:24:09.724              "subnqn": "nqn.2019-08.org.qemu:12341",
00:24:09.724              "oacs": {
00:24:09.724                "security": 0,
00:24:09.724                "format": 1,
00:24:09.724                "firmware": 0,
00:24:09.724                "ns_manage": 1
00:24:09.724              },
00:24:09.724              "multi_ctrlr": false,
00:24:09.724              "ana_reporting": false
00:24:09.724            },
00:24:09.724            "vs": {
00:24:09.724              "nvme_version": "1.4"
00:24:09.724            },
00:24:09.724            "ns_data": {
00:24:09.724              "id": 1,
00:24:09.724              "can_share": false
00:24:09.724            }
00:24:09.724          }
00:24:09.724        ],
00:24:09.724        "mp_policy": "active_passive"
00:24:09.724      }
00:24:09.724    }
00:24:09.724  ]'
00:24:09.724      14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:09.724     14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:09.724      14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:09.983     14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720
00:24:09.983     14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:24:09.983     14:34:48 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120
00:24:09.983    14:34:48 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120
00:24:09.983    14:34:48 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
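get_bdev_size boils the bdev_get_bdevs JSON down to a size in MiB, and the figures above check out: 1310720 blocks * 4096 B/block = 5 GiB = 5120 MiB. Roughly:

  bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720
  echo $(( nb * bs / 1024 / 1024 ))             # 5120

Since the requested 103424 MiB exceeds the 5120 MiB namespace, the guard falls through and the script evidently relies on the thin-provisioned logical volume created next.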
00:24:09.983    14:34:48 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols
00:24:09.983     14:34:48 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:24:09.983     14:34:48 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:24:10.242    14:34:48 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=f2ba17cc-3772-41b8-9c85-8fa3ad8de82f
00:24:10.242    14:34:48 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores
00:24:10.242    14:34:48 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f2ba17cc-3772-41b8-9c85-8fa3ad8de82f
00:24:10.501     14:34:49 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:24:10.759    14:34:49 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=2edefd5e-867e-4362-bcac-047aadb98384
00:24:10.759    14:34:49 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2edefd5e-867e-4362-bcac-047aadb98384
00:24:11.017   14:34:49 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
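The logical-volume plumbing compresses to four RPCs: list and drop any stale lvstores, create a fresh one on nvme0n1, and carve a thin-provisioned (-t) 103424 MiB volume from it. As issued above, with rpc.py abbreviating the full scripts/rpc.py path and $stale standing in as a placeholder:

  rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'     # -> f2ba17cc-...
  rpc.py bdev_lvol_delete_lvstore -u "$stale"
  lvs=$(rpc.py bdev_lvol_create_lvstore nvme0n1 lvs)      # -> 2edefd5e-...
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"   # -> da09d7d5-...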
00:24:11.017   14:34:49 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']'
00:24:11.017    14:34:49 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:11.017    14:34:49 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0
00:24:11.017    14:34:49 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:24:11.017    14:34:49 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:11.018    14:34:49 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size=
00:24:11.018     14:34:49 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:11.018     14:34:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:11.018     14:34:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:11.018     14:34:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:11.018     14:34:49 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:11.018      14:34:49 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:11.582     14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:24:11.582    {
00:24:11.582      "name": "da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0",
00:24:11.582      "aliases": [
00:24:11.582        "lvs/nvme0n1p0"
00:24:11.582      ],
00:24:11.582      "product_name": "Logical Volume",
00:24:11.582      "block_size": 4096,
00:24:11.582      "num_blocks": 26476544,
00:24:11.582      "uuid": "da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0",
00:24:11.582      "assigned_rate_limits": {
00:24:11.582        "rw_ios_per_sec": 0,
00:24:11.582        "rw_mbytes_per_sec": 0,
00:24:11.582        "r_mbytes_per_sec": 0,
00:24:11.582        "w_mbytes_per_sec": 0
00:24:11.582      },
00:24:11.582      "claimed": false,
00:24:11.582      "zoned": false,
00:24:11.582      "supported_io_types": {
00:24:11.582        "read": true,
00:24:11.582        "write": true,
00:24:11.582        "unmap": true,
00:24:11.582        "flush": false,
00:24:11.582        "reset": true,
00:24:11.582        "nvme_admin": false,
00:24:11.582        "nvme_io": false,
00:24:11.582        "nvme_io_md": false,
00:24:11.582        "write_zeroes": true,
00:24:11.582        "zcopy": false,
00:24:11.582        "get_zone_info": false,
00:24:11.582        "zone_management": false,
00:24:11.582        "zone_append": false,
00:24:11.582        "compare": false,
00:24:11.582        "compare_and_write": false,
00:24:11.582        "abort": false,
00:24:11.582        "seek_hole": true,
00:24:11.582        "seek_data": true,
00:24:11.582        "copy": false,
00:24:11.582        "nvme_iov_md": false
00:24:11.582      },
00:24:11.582      "driver_specific": {
00:24:11.582        "lvol": {
00:24:11.582          "lvol_store_uuid": "2edefd5e-867e-4362-bcac-047aadb98384",
00:24:11.582          "base_bdev": "nvme0n1",
00:24:11.583          "thin_provision": true,
00:24:11.583          "num_allocated_clusters": 0,
00:24:11.583          "snapshot": false,
00:24:11.583          "clone": false,
00:24:11.583          "esnap_clone": false
00:24:11.583        }
00:24:11.583      }
00:24:11.583    }
00:24:11.583  ]'
00:24:11.583      14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:11.583     14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:11.583      14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:11.583     14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:24:11.583     14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:24:11.583     14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:24:11.583    14:34:50 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171
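The jump from 103424 to 5171 is consistent with sizing the write-buffer cache at 5% of the base volume under shell integer math; this is an inference from the numbers, not a quote from common.sh:

  base_size=$(( 103424 * 5 / 100 ))   # 517120 / 100 = 5171 (5171.2 truncated)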
00:24:11.583    14:34:50 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev
00:24:11.583     14:34:50 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:24:11.840    14:34:50 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:24:11.840    14:34:50 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]]
00:24:11.840     14:34:50 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:11.840     14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:11.840     14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:11.840     14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:11.840     14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:11.840      14:34:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:12.406     14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:24:12.406    {
00:24:12.406      "name": "da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0",
00:24:12.406      "aliases": [
00:24:12.406        "lvs/nvme0n1p0"
00:24:12.406      ],
00:24:12.406      "product_name": "Logical Volume",
00:24:12.406      "block_size": 4096,
00:24:12.406      "num_blocks": 26476544,
00:24:12.406      "uuid": "da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0",
00:24:12.406      "assigned_rate_limits": {
00:24:12.406        "rw_ios_per_sec": 0,
00:24:12.406        "rw_mbytes_per_sec": 0,
00:24:12.406        "r_mbytes_per_sec": 0,
00:24:12.406        "w_mbytes_per_sec": 0
00:24:12.406      },
00:24:12.406      "claimed": false,
00:24:12.406      "zoned": false,
00:24:12.406      "supported_io_types": {
00:24:12.406        "read": true,
00:24:12.406        "write": true,
00:24:12.406        "unmap": true,
00:24:12.406        "flush": false,
00:24:12.406        "reset": true,
00:24:12.406        "nvme_admin": false,
00:24:12.406        "nvme_io": false,
00:24:12.406        "nvme_io_md": false,
00:24:12.406        "write_zeroes": true,
00:24:12.406        "zcopy": false,
00:24:12.406        "get_zone_info": false,
00:24:12.406        "zone_management": false,
00:24:12.406        "zone_append": false,
00:24:12.406        "compare": false,
00:24:12.406        "compare_and_write": false,
00:24:12.406        "abort": false,
00:24:12.406        "seek_hole": true,
00:24:12.406        "seek_data": true,
00:24:12.406        "copy": false,
00:24:12.406        "nvme_iov_md": false
00:24:12.406      },
00:24:12.406      "driver_specific": {
00:24:12.406        "lvol": {
00:24:12.406          "lvol_store_uuid": "2edefd5e-867e-4362-bcac-047aadb98384",
00:24:12.406          "base_bdev": "nvme0n1",
00:24:12.406          "thin_provision": true,
00:24:12.406          "num_allocated_clusters": 0,
00:24:12.406          "snapshot": false,
00:24:12.406          "clone": false,
00:24:12.406          "esnap_clone": false
00:24:12.406        }
00:24:12.406      }
00:24:12.406    }
00:24:12.406  ]'
00:24:12.406      14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:12.406     14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:12.406      14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:12.406     14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:24:12.406     14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:24:12.406     14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:24:12.406    14:34:51 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171
00:24:12.406    14:34:51 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:24:12.972   14:34:51 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0
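The cache side mirrors the base side: attach the second QEMU controller as nvc0, then carve a single 5171 MiB split to serve as the FTL write buffer. As issued above (rpc.py again abbreviating scripts/rpc.py):

  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # -> nvc0n1
  rpc.py bdev_split_create nvc0n1 -s 5171 1                            # -> nvc0n1p0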
00:24:12.972    14:34:51 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:12.972    14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:12.972    14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info
00:24:12.972    14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs
00:24:12.972    14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb
00:24:12.972     14:34:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0
00:24:13.229    14:34:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[
00:24:13.229    {
00:24:13.229      "name": "da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0",
00:24:13.229      "aliases": [
00:24:13.229        "lvs/nvme0n1p0"
00:24:13.229      ],
00:24:13.229      "product_name": "Logical Volume",
00:24:13.229      "block_size": 4096,
00:24:13.229      "num_blocks": 26476544,
00:24:13.229      "uuid": "da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0",
00:24:13.229      "assigned_rate_limits": {
00:24:13.229        "rw_ios_per_sec": 0,
00:24:13.229        "rw_mbytes_per_sec": 0,
00:24:13.229        "r_mbytes_per_sec": 0,
00:24:13.229        "w_mbytes_per_sec": 0
00:24:13.229      },
00:24:13.229      "claimed": false,
00:24:13.229      "zoned": false,
00:24:13.229      "supported_io_types": {
00:24:13.229        "read": true,
00:24:13.229        "write": true,
00:24:13.229        "unmap": true,
00:24:13.229        "flush": false,
00:24:13.229        "reset": true,
00:24:13.229        "nvme_admin": false,
00:24:13.229        "nvme_io": false,
00:24:13.229        "nvme_io_md": false,
00:24:13.229        "write_zeroes": true,
00:24:13.229        "zcopy": false,
00:24:13.229        "get_zone_info": false,
00:24:13.229        "zone_management": false,
00:24:13.229        "zone_append": false,
00:24:13.229        "compare": false,
00:24:13.229        "compare_and_write": false,
00:24:13.229        "abort": false,
00:24:13.229        "seek_hole": true,
00:24:13.229        "seek_data": true,
00:24:13.229        "copy": false,
00:24:13.229        "nvme_iov_md": false
00:24:13.230      },
00:24:13.230      "driver_specific": {
00:24:13.230        "lvol": {
00:24:13.230          "lvol_store_uuid": "2edefd5e-867e-4362-bcac-047aadb98384",
00:24:13.230          "base_bdev": "nvme0n1",
00:24:13.230          "thin_provision": true,
00:24:13.230          "num_allocated_clusters": 0,
00:24:13.230          "snapshot": false,
00:24:13.230          "clone": false,
00:24:13.230          "esnap_clone": false
00:24:13.230        }
00:24:13.230      }
00:24:13.230    }
00:24:13.230  ]'
00:24:13.230     14:34:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:24:13.230    14:34:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096
00:24:13.230     14:34:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:24:13.230    14:34:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544
00:24:13.230    14:34:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:24:13.230    14:34:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424
00:24:13.230   14:34:52 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
00:24:13.230   14:34:52 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0 --l2p_dram_limit 10'
00:24:13.230   14:34:52 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:24:13.230   14:34:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:24:13.230   14:34:52 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:24:13.230   14:34:52 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']'
00:24:13.230  /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected
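The 'integer expression expected' message is bash objecting to comparing an empty string with -eq at restore.sh line 54; the test simply evaluates false and the run continues. Assuming the variable is the optional -f flag (a guess from the getopts spec), a quieter guard would default it first:

  [ "${fast:-0}" -eq 1 ]   # an unset flag reads as 0 instead of ''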
00:24:13.230   14:34:52 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d da09d7d5-ff50-4ca1-b8a6-b6c2f498c6f0 --l2p_dram_limit 10 -c nvc0n1p0
00:24:13.488  [2024-11-20 14:34:52.411976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.488  [2024-11-20 14:34:52.412044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:24:13.488  [2024-11-20 14:34:52.412072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:24:13.488  [2024-11-20 14:34:52.412085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.488  [2024-11-20 14:34:52.412178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.488  [2024-11-20 14:34:52.412200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:13.488  [2024-11-20 14:34:52.412217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.063 ms
00:24:13.488  [2024-11-20 14:34:52.412230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.488  [2024-11-20 14:34:52.412264] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:13.488  [2024-11-20 14:34:52.413268] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:13.488  [2024-11-20 14:34:52.413310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.488  [2024-11-20 14:34:52.413326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:13.488  [2024-11-20 14:34:52.413341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.050 ms
00:24:13.488  [2024-11-20 14:34:52.413353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.488  [2024-11-20 14:34:52.413543] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cd5e6f69-30c0-44af-9535-aa51982d8157
00:24:13.488  [2024-11-20 14:34:52.414645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.488  [2024-11-20 14:34:52.414688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:24:13.488  [2024-11-20 14:34:52.414706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.025 ms
00:24:13.488  [2024-11-20 14:34:52.414720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.488  [2024-11-20 14:34:52.419476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.488  [2024-11-20 14:34:52.419534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:13.488  [2024-11-20 14:34:52.419550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.695 ms
00:24:13.488  [2024-11-20 14:34:52.419565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.488  [2024-11-20 14:34:52.419723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.488  [2024-11-20 14:34:52.419748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:13.488  [2024-11-20 14:34:52.419762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.092 ms
00:24:13.488  [2024-11-20 14:34:52.419782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.488  [2024-11-20 14:34:52.419889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.488  [2024-11-20 14:34:52.419912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:24:13.488  [2024-11-20 14:34:52.419925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:24:13.488  [2024-11-20 14:34:52.419942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.488  [2024-11-20 14:34:52.419976] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:13.488  [2024-11-20 14:34:52.424561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.488  [2024-11-20 14:34:52.424620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:13.488  [2024-11-20 14:34:52.424640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.591 ms
00:24:13.488  [2024-11-20 14:34:52.424653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.488  [2024-11-20 14:34:52.424711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.488  [2024-11-20 14:34:52.424727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:24:13.488  [2024-11-20 14:34:52.424742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:24:13.488  [2024-11-20 14:34:52.424755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.488  [2024-11-20 14:34:52.424818] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:24:13.488  [2024-11-20 14:34:52.424994] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:13.488  [2024-11-20 14:34:52.425020] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:13.489  [2024-11-20 14:34:52.425038] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:13.489  [2024-11-20 14:34:52.425056] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:24:13.489  [2024-11-20 14:34:52.425070] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:24:13.489  [2024-11-20 14:34:52.425085] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:24:13.489  [2024-11-20 14:34:52.425096] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:24:13.489  [2024-11-20 14:34:52.425113] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:24:13.489  [2024-11-20 14:34:52.425124] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:24:13.489  [2024-11-20 14:34:52.425139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.489  [2024-11-20 14:34:52.425151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:24:13.489  [2024-11-20 14:34:52.425165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.326 ms
00:24:13.489  [2024-11-20 14:34:52.425195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.489  [2024-11-20 14:34:52.425298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.489  [2024-11-20 14:34:52.425314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:24:13.489  [2024-11-20 14:34:52.425330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.070 ms
00:24:13.489  [2024-11-20 14:34:52.425341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.489  [2024-11-20 14:34:52.425478] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:13.489  [2024-11-20 14:34:52.425498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:13.489  [2024-11-20 14:34:52.425514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:13.489  [2024-11-20 14:34:52.425527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.489  [2024-11-20 14:34:52.425542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:13.489  [2024-11-20 14:34:52.425554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:24:13.489  [2024-11-20 14:34:52.425583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:24:13.489  [2024-11-20 14:34:52.425600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:13.489  [2024-11-20 14:34:52.425615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:24:13.489  [2024-11-20 14:34:52.425626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:13.489  [2024-11-20 14:34:52.425639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:13.489  [2024-11-20 14:34:52.425652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:24:13.489  [2024-11-20 14:34:52.425665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:13.489  [2024-11-20 14:34:52.425677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:13.489  [2024-11-20 14:34:52.425691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:24:13.489  [2024-11-20 14:34:52.425701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.489  [2024-11-20 14:34:52.425719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:13.489  [2024-11-20 14:34:52.425731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:24:13.489  [2024-11-20 14:34:52.425745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.489  [2024-11-20 14:34:52.425756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:13.489  [2024-11-20 14:34:52.425769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:24:13.489  [2024-11-20 14:34:52.425780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:13.489  [2024-11-20 14:34:52.425794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:13.489  [2024-11-20 14:34:52.425806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:24:13.489  [2024-11-20 14:34:52.425819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:13.489  [2024-11-20 14:34:52.425831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:13.489  [2024-11-20 14:34:52.425844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:24:13.489  [2024-11-20 14:34:52.425856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:13.489  [2024-11-20 14:34:52.425869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:13.489  [2024-11-20 14:34:52.425880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:24:13.489  [2024-11-20 14:34:52.425893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:13.489  [2024-11-20 14:34:52.425904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:13.489  [2024-11-20 14:34:52.425920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:24:13.489  [2024-11-20 14:34:52.425931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:13.489  [2024-11-20 14:34:52.425944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:13.489  [2024-11-20 14:34:52.425955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:24:13.489  [2024-11-20 14:34:52.425968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:13.489  [2024-11-20 14:34:52.425979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:24:13.489  [2024-11-20 14:34:52.425993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:24:13.489  [2024-11-20 14:34:52.426004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.489  [2024-11-20 14:34:52.426017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:13.489  [2024-11-20 14:34:52.426028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:24:13.489  [2024-11-20 14:34:52.426041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.489  [2024-11-20 14:34:52.426052] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:13.489  [2024-11-20 14:34:52.426069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:13.489  [2024-11-20 14:34:52.426081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:13.489  [2024-11-20 14:34:52.426095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:13.489  [2024-11-20 14:34:52.426108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:13.489  [2024-11-20 14:34:52.426125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:24:13.489  [2024-11-20 14:34:52.426136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:24:13.489  [2024-11-20 14:34:52.426149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:13.489  [2024-11-20 14:34:52.426160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:24:13.489  [2024-11-20 14:34:52.426174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:24:13.489  [2024-11-20 14:34:52.426191] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:13.489  [2024-11-20 14:34:52.426209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:13.489  [2024-11-20 14:34:52.426224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:24:13.489  [2024-11-20 14:34:52.426239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:24:13.489  [2024-11-20 14:34:52.426252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:24:13.489  [2024-11-20 14:34:52.426266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:24:13.489  [2024-11-20 14:34:52.426278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:24:13.489  [2024-11-20 14:34:52.426291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:24:13.489  [2024-11-20 14:34:52.426303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:24:13.489  [2024-11-20 14:34:52.426317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:24:13.489  [2024-11-20 14:34:52.426329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:24:13.489  [2024-11-20 14:34:52.426344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:24:13.489  [2024-11-20 14:34:52.426356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:24:13.489  [2024-11-20 14:34:52.426371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:24:13.489  [2024-11-20 14:34:52.426383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:24:13.489  [2024-11-20 14:34:52.426397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:24:13.489  [2024-11-20 14:34:52.426409] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:13.489  [2024-11-20 14:34:52.426424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:13.489  [2024-11-20 14:34:52.426437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:13.489  [2024-11-20 14:34:52.426451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:13.489  [2024-11-20 14:34:52.426463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:13.489  [2024-11-20 14:34:52.426477] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:13.489  [2024-11-20 14:34:52.426490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:13.489  [2024-11-20 14:34:52.426505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:24:13.489  [2024-11-20 14:34:52.426517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.097 ms
00:24:13.489  [2024-11-20 14:34:52.426531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:13.489  [2024-11-20 14:34:52.426599] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:24:13.489  [2024-11-20 14:34:52.426624] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:24:16.016  [2024-11-20 14:34:54.387798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.016  [2024-11-20 14:34:54.387872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:24:16.016  [2024-11-20 14:34:54.387893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1961.210 ms
00:24:16.016  [2024-11-20 14:34:54.387908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.016  [2024-11-20 14:34:54.421043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.016  [2024-11-20 14:34:54.421112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:16.016  [2024-11-20 14:34:54.421134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.858 ms
00:24:16.016  [2024-11-20 14:34:54.421150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.016  [2024-11-20 14:34:54.421351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.016  [2024-11-20 14:34:54.421380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:24:16.016  [2024-11-20 14:34:54.421395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.068 ms
00:24:16.017  [2024-11-20 14:34:54.421416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.463089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.463161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:16.017  [2024-11-20 14:34:54.463182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 41.575 ms
00:24:16.017  [2024-11-20 14:34:54.463197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.463271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.463300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:16.017  [2024-11-20 14:34:54.463314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:24:16.017  [2024-11-20 14:34:54.463328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.463765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.463793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:16.017  [2024-11-20 14:34:54.463807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.331 ms
00:24:16.017  [2024-11-20 14:34:54.463821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.463959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.463978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:16.017  [2024-11-20 14:34:54.463994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.109 ms
00:24:16.017  [2024-11-20 14:34:54.464009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.482402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.482684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:16.017  [2024-11-20 14:34:54.482717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.365 ms
00:24:16.017  [2024-11-20 14:34:54.482734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.508147] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:16.017  [2024-11-20 14:34:54.511148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.511199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:24:16.017  [2024-11-20 14:34:54.511226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 28.257 ms
00:24:16.017  [2024-11-20 14:34:54.511238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.567242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.567489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:24:16.017  [2024-11-20 14:34:54.567528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 55.930 ms
00:24:16.017  [2024-11-20 14:34:54.567543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.567805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.567846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:24:16.017  [2024-11-20 14:34:54.567870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.171 ms
00:24:16.017  [2024-11-20 14:34:54.567882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.599777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.599842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:24:16.017  [2024-11-20 14:34:54.599866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.806 ms
00:24:16.017  [2024-11-20 14:34:54.599879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.630819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.630881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:24:16.017  [2024-11-20 14:34:54.630905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.871 ms
00:24:16.017  [2024-11-20 14:34:54.630917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.631697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.631734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:24:16.017  [2024-11-20 14:34:54.631754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.721 ms
00:24:16.017  [2024-11-20 14:34:54.631769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.711449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.711522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:24:16.017  [2024-11-20 14:34:54.711556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 79.574 ms
00:24:16.017  [2024-11-20 14:34:54.711585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.743652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.743725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:24:16.017  [2024-11-20 14:34:54.743752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.933 ms
00:24:16.017  [2024-11-20 14:34:54.743766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.775076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.775137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:24:16.017  [2024-11-20 14:34:54.775160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.246 ms
00:24:16.017  [2024-11-20 14:34:54.775173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.806812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.806883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:24:16.017  [2024-11-20 14:34:54.806908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.574 ms
00:24:16.017  [2024-11-20 14:34:54.806921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.807001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.807020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:24:16.017  [2024-11-20 14:34:54.807041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:24:16.017  [2024-11-20 14:34:54.807053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.807194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.017  [2024-11-20 14:34:54.807216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:24:16.017  [2024-11-20 14:34:54.807236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.045 ms
00:24:16.017  [2024-11-20 14:34:54.807247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.017  [2024-11-20 14:34:54.808392] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2395.906 ms, result 0
00:24:16.017  {
00:24:16.017    "name": "ftl0",
00:24:16.017    "uuid": "cd5e6f69-30c0-44af-9535-aa51982d8157"
00:24:16.017  }
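The startup dump is internally consistent: 20971520 L2P entries at the reported 4-byte address size is exactly the 80.00 MiB l2p region, and the --l2p_dram_limit 10 passed to bdev_ftl_create is what pins the resident table at '9 (of 10) MiB' above.

  # 20971520 entries * 4 B/entry = 83886080 B = 80 MiB   (region l2p)
  # 20971520 entries * 4096 B    = 80 GiB of mapped user data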
00:24:16.017   14:34:54 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:24:16.017   14:34:54 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:24:16.275   14:34:55 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}'
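Before unloading, the script snapshots the bdev subsystem so the restore phase can bring the same stack back; presumably the two echoes wrap save_subsystem_config into a complete JSON document written to the config file named earlier, along the lines of:

  {
      echo '{"subsystems": ['
      rpc.py save_subsystem_config -n bdev
      echo ']}'
  } > "$spdk_tgt_cnfg"   # test/ftl/config/tgt.json per common.sh@16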
00:24:16.275   14:34:55 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:24:16.532  [2024-11-20 14:34:55.464257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.532  [2024-11-20 14:34:55.464358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:24:16.532  [2024-11-20 14:34:55.464391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:24:16.532  [2024-11-20 14:34:55.464432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.532  [2024-11-20 14:34:55.464488] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:16.532  [2024-11-20 14:34:55.468538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.532  [2024-11-20 14:34:55.468615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:24:16.532  [2024-11-20 14:34:55.468640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.007 ms
00:24:16.532  [2024-11-20 14:34:55.468653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.532  [2024-11-20 14:34:55.469029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.532  [2024-11-20 14:34:55.469062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:24:16.532  [2024-11-20 14:34:55.469091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.322 ms
00:24:16.532  [2024-11-20 14:34:55.469103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.532  [2024-11-20 14:34:55.472425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.532  [2024-11-20 14:34:55.472462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:24:16.532  [2024-11-20 14:34:55.472481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.293 ms
00:24:16.532  [2024-11-20 14:34:55.472493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.532  [2024-11-20 14:34:55.479248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.532  [2024-11-20 14:34:55.479287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:24:16.532  [2024-11-20 14:34:55.479310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.722 ms
00:24:16.532  [2024-11-20 14:34:55.479322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.532  [2024-11-20 14:34:55.510817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.532  [2024-11-20 14:34:55.510874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:24:16.532  [2024-11-20 14:34:55.510896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.397 ms
00:24:16.532  [2024-11-20 14:34:55.510909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.792  [2024-11-20 14:34:55.529710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.792  [2024-11-20 14:34:55.529934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:24:16.792  [2024-11-20 14:34:55.529971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.734 ms
00:24:16.792  [2024-11-20 14:34:55.529985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.792  [2024-11-20 14:34:55.530209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.792  [2024-11-20 14:34:55.530233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:24:16.792  [2024-11-20 14:34:55.530250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.149 ms
00:24:16.792  [2024-11-20 14:34:55.530262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.792  [2024-11-20 14:34:55.562481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.792  [2024-11-20 14:34:55.562559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:24:16.792  [2024-11-20 14:34:55.562601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.174 ms
00:24:16.792  [2024-11-20 14:34:55.562615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.792  [2024-11-20 14:34:55.594344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.792  [2024-11-20 14:34:55.594423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:24:16.792  [2024-11-20 14:34:55.594448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.629 ms
00:24:16.792  [2024-11-20 14:34:55.594460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.792  [2024-11-20 14:34:55.625760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.793  [2024-11-20 14:34:55.625832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:24:16.793  [2024-11-20 14:34:55.625857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.204 ms
00:24:16.793  [2024-11-20 14:34:55.625869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.793  [2024-11-20 14:34:55.664138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.793  [2024-11-20 14:34:55.664220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:24:16.793  [2024-11-20 14:34:55.664256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.105 ms
00:24:16.793  [2024-11-20 14:34:55.664277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.793  [2024-11-20 14:34:55.664357] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:16.793  [2024-11-20 14:34:55.664392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.664987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.665988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.793  [2024-11-20 14:34:55.666359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:24:16.794  [2024-11-20 14:34:55.666824] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:24:16.794  [2024-11-20 14:34:55.666854] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         cd5e6f69-30c0-44af-9535-aa51982d8157
00:24:16.794  [2024-11-20 14:34:55.666875] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:24:16.794  [2024-11-20 14:34:55.666900] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:24:16.794  [2024-11-20 14:34:55.666920] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:24:16.794  [2024-11-20 14:34:55.666949] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:24:16.794  [2024-11-20 14:34:55.666967] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:16.794  [2024-11-20 14:34:55.666988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:24:16.794  [2024-11-20 14:34:55.667007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:24:16.794  [2024-11-20 14:34:55.667027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:24:16.794  [2024-11-20 14:34:55.667044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:24:16.794  [2024-11-20 14:34:55.667068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.794  [2024-11-20 14:34:55.667088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:24:16.794  [2024-11-20 14:34:55.667112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.715 ms
00:24:16.794  [2024-11-20 14:34:55.667131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.794  [2024-11-20 14:34:55.692057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.794  [2024-11-20 14:34:55.692173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:24:16.794  [2024-11-20 14:34:55.692212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.797 ms
00:24:16.794  [2024-11-20 14:34:55.692234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.794  [2024-11-20 14:34:55.693001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:16.794  [2024-11-20 14:34:55.693046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:24:16.794  [2024-11-20 14:34:55.693291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.668 ms
00:24:16.794  [2024-11-20 14:34:55.693312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.794  [2024-11-20 14:34:55.753849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.794  [2024-11-20 14:34:55.753944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:16.794  [2024-11-20 14:34:55.753969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.794  [2024-11-20 14:34:55.753984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.794  [2024-11-20 14:34:55.754093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.794  [2024-11-20 14:34:55.754121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:16.794  [2024-11-20 14:34:55.754140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.794  [2024-11-20 14:34:55.754152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.794  [2024-11-20 14:34:55.754339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.794  [2024-11-20 14:34:55.754361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:16.794  [2024-11-20 14:34:55.754378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.794  [2024-11-20 14:34:55.754390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:16.794  [2024-11-20 14:34:55.754424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:16.794  [2024-11-20 14:34:55.754439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:16.794  [2024-11-20 14:34:55.754453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:16.794  [2024-11-20 14:34:55.754465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.053  [2024-11-20 14:34:55.859954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:17.053  [2024-11-20 14:34:55.860031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:17.053  [2024-11-20 14:34:55.860055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:17.053  [2024-11-20 14:34:55.860067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.053  [2024-11-20 14:34:55.946224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:17.053  [2024-11-20 14:34:55.946302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:17.053  [2024-11-20 14:34:55.946325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:17.053  [2024-11-20 14:34:55.946341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.053  [2024-11-20 14:34:55.946484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:17.053  [2024-11-20 14:34:55.946505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:17.053  [2024-11-20 14:34:55.946521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:17.053  [2024-11-20 14:34:55.946533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.053  [2024-11-20 14:34:55.946646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:17.053  [2024-11-20 14:34:55.946667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:17.053  [2024-11-20 14:34:55.946683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:17.053  [2024-11-20 14:34:55.946694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.053  [2024-11-20 14:34:55.946829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:17.053  [2024-11-20 14:34:55.946849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:17.053  [2024-11-20 14:34:55.946865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:17.053  [2024-11-20 14:34:55.946876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.053  [2024-11-20 14:34:55.946933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:17.053  [2024-11-20 14:34:55.946952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:24:17.053  [2024-11-20 14:34:55.946967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:17.053  [2024-11-20 14:34:55.946980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.053  [2024-11-20 14:34:55.947035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:17.053  [2024-11-20 14:34:55.947051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:17.053  [2024-11-20 14:34:55.947065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:17.053  [2024-11-20 14:34:55.947076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.053  [2024-11-20 14:34:55.947139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:17.053  [2024-11-20 14:34:55.947157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:17.053  [2024-11-20 14:34:55.947173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:24:17.053  [2024-11-20 14:34:55.947184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:17.053  [2024-11-20 14:34:55.947354] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 483.079 ms, result 0
00:24:17.053  true
00:24:17.053   14:34:55 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79346
00:24:17.053   14:34:55 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79346 ']'
00:24:17.053   14:34:55 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79346
00:24:17.053    14:34:55 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname
00:24:17.053   14:34:55 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:17.053    14:34:55 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79346
00:24:17.053   14:34:56 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:17.053   14:34:56 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:17.053   14:34:56 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79346'
00:24:17.053  killing process with pid 79346
00:24:17.053   14:34:56 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79346
00:24:17.053   14:34:56 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79346
00:24:22.319   14:35:00 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:24:26.511  262144+0 records in
00:24:26.511  262144+0 records out
00:24:26.511  1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.16612 s, 208 MB/s
00:24:26.511   14:35:05 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:24:29.039   14:35:07 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:29.039  [2024-11-20 14:35:07.785092] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:24:29.039  [2024-11-20 14:35:07.785305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79594 ]
00:24:29.039  [2024-11-20 14:35:07.973631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:29.297  [2024-11-20 14:35:08.079528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:29.555  [2024-11-20 14:35:08.426509] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:29.555  [2024-11-20 14:35:08.426644] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:29.815  [2024-11-20 14:35:08.596126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.596203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:24:29.815  [2024-11-20 14:35:08.596237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:24:29.815  [2024-11-20 14:35:08.596251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.596343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.596365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:24:29.815  [2024-11-20 14:35:08.596389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.060 ms
00:24:29.815  [2024-11-20 14:35:08.596401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.596434] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:29.815  [2024-11-20 14:35:08.597437] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:29.815  [2024-11-20 14:35:08.597474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.597489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:24:29.815  [2024-11-20 14:35:08.597503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.047 ms
00:24:29.815  [2024-11-20 14:35:08.597515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.598812] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:29.815  [2024-11-20 14:35:08.616162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.616256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:24:29.815  [2024-11-20 14:35:08.616280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.349 ms
00:24:29.815  [2024-11-20 14:35:08.616302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.616427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.616449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:24:29.815  [2024-11-20 14:35:08.616464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.040 ms
00:24:29.815  [2024-11-20 14:35:08.616476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.621230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.621310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:24:29.815  [2024-11-20 14:35:08.621336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.600 ms
00:24:29.815  [2024-11-20 14:35:08.621373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.621549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.621608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:24:29.815  [2024-11-20 14:35:08.621631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.122 ms
00:24:29.815  [2024-11-20 14:35:08.621645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.621739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.621762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:24:29.815  [2024-11-20 14:35:08.621776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.030 ms
00:24:29.815  [2024-11-20 14:35:08.621787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.621848] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:29.815  [2024-11-20 14:35:08.628642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.628704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:24:29.815  [2024-11-20 14:35:08.628737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.816 ms
00:24:29.815  [2024-11-20 14:35:08.628782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.628851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.628875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:24:29.815  [2024-11-20 14:35:08.628893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.017 ms
00:24:29.815  [2024-11-20 14:35:08.628907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.629009] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:29.815  [2024-11-20 14:35:08.629066] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:24:29.815  [2024-11-20 14:35:08.629136] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:24:29.815  [2024-11-20 14:35:08.629198] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:24:29.815  [2024-11-20 14:35:08.629358] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:29.815  [2024-11-20 14:35:08.629393] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:29.815  [2024-11-20 14:35:08.629424] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:29.815  [2024-11-20 14:35:08.629448] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:24:29.815  [2024-11-20 14:35:08.629470] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:24:29.815  [2024-11-20 14:35:08.629489] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:24:29.815  [2024-11-20 14:35:08.629508] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:24:29.815  [2024-11-20 14:35:08.629526] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:24:29.815  [2024-11-20 14:35:08.629560] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:24:29.815  [2024-11-20 14:35:08.629611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.629633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:24:29.815  [2024-11-20 14:35:08.629654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.608 ms
00:24:29.815  [2024-11-20 14:35:08.629671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.629961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.815  [2024-11-20 14:35:08.629993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:24:29.815  [2024-11-20 14:35:08.630020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.235 ms
00:24:29.815  [2024-11-20 14:35:08.630038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.815  [2024-11-20 14:35:08.630359] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:29.815  [2024-11-20 14:35:08.630392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:29.815  [2024-11-20 14:35:08.630412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:29.815  [2024-11-20 14:35:08.630432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:29.815  [2024-11-20 14:35:08.630452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:29.815  [2024-11-20 14:35:08.630470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:24:29.815  [2024-11-20 14:35:08.630487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:24:29.815  [2024-11-20 14:35:08.630504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:29.815  [2024-11-20 14:35:08.630522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:24:29.815  [2024-11-20 14:35:08.630540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:29.815  [2024-11-20 14:35:08.630557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:29.815  [2024-11-20 14:35:08.630593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:24:29.815  [2024-11-20 14:35:08.630613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:24:29.815  [2024-11-20 14:35:08.630633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:29.815  [2024-11-20 14:35:08.630651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:24:29.815  [2024-11-20 14:35:08.630693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:29.815  [2024-11-20 14:35:08.630712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:29.815  [2024-11-20 14:35:08.630729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:24:29.815  [2024-11-20 14:35:08.630747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:29.815  [2024-11-20 14:35:08.630765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:29.815  [2024-11-20 14:35:08.630783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:24:29.815  [2024-11-20 14:35:08.630807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:29.815  [2024-11-20 14:35:08.630825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:29.815  [2024-11-20 14:35:08.630850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:24:29.815  [2024-11-20 14:35:08.630867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:29.815  [2024-11-20 14:35:08.630884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:29.815  [2024-11-20 14:35:08.630901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:24:29.815  [2024-11-20 14:35:08.630918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:29.815  [2024-11-20 14:35:08.630937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:29.815  [2024-11-20 14:35:08.630955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:24:29.815  [2024-11-20 14:35:08.630972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:24:29.815  [2024-11-20 14:35:08.630992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:29.815  [2024-11-20 14:35:08.631010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:24:29.816  [2024-11-20 14:35:08.631027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:29.816  [2024-11-20 14:35:08.631045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:29.816  [2024-11-20 14:35:08.631063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:24:29.816  [2024-11-20 14:35:08.631082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:24:29.816  [2024-11-20 14:35:08.631102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:24:29.816  [2024-11-20 14:35:08.631119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:24:29.816  [2024-11-20 14:35:08.631136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:29.816  [2024-11-20 14:35:08.631154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:29.816  [2024-11-20 14:35:08.631179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:24:29.816  [2024-11-20 14:35:08.631196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:29.816  [2024-11-20 14:35:08.631213] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:29.816  [2024-11-20 14:35:08.631230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:29.816  [2024-11-20 14:35:08.631252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:24:29.816  [2024-11-20 14:35:08.631271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:24:29.816  [2024-11-20 14:35:08.631292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:29.816  [2024-11-20 14:35:08.631310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:24:29.816  [2024-11-20 14:35:08.631328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:24:29.816  [2024-11-20 14:35:08.631346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:29.816  [2024-11-20 14:35:08.631363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:24:29.816  [2024-11-20 14:35:08.631380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:24:29.816  [2024-11-20 14:35:08.631402] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:29.816  [2024-11-20 14:35:08.631440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:29.816  [2024-11-20 14:35:08.631462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:24:29.816  [2024-11-20 14:35:08.631482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:24:29.816  [2024-11-20 14:35:08.631502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:24:29.816  [2024-11-20 14:35:08.631521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:24:29.816  [2024-11-20 14:35:08.631540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:24:29.816  [2024-11-20 14:35:08.631560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:24:29.816  [2024-11-20 14:35:08.631617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:24:29.816  [2024-11-20 14:35:08.631639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:24:29.816  [2024-11-20 14:35:08.631660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:24:29.816  [2024-11-20 14:35:08.631689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:24:29.816  [2024-11-20 14:35:08.631710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:24:29.816  [2024-11-20 14:35:08.631729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:24:29.816  [2024-11-20 14:35:08.631748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:24:29.816  [2024-11-20 14:35:08.631768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:24:29.816  [2024-11-20 14:35:08.631787] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:29.816  [2024-11-20 14:35:08.631825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:29.816  [2024-11-20 14:35:08.631847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:29.816  [2024-11-20 14:35:08.631866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:29.816  [2024-11-20 14:35:08.631884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:29.816  [2024-11-20 14:35:08.631903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:29.816  [2024-11-20 14:35:08.631923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.816  [2024-11-20 14:35:08.631943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:24:29.816  [2024-11-20 14:35:08.631962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.768 ms
00:24:29.816  [2024-11-20 14:35:08.631980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.816  [2024-11-20 14:35:08.680076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.816  [2024-11-20 14:35:08.680446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:29.816  [2024-11-20 14:35:08.680492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 47.957 ms
00:24:29.816  [2024-11-20 14:35:08.680513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.816  [2024-11-20 14:35:08.680842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.816  [2024-11-20 14:35:08.680870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:24:29.816  [2024-11-20 14:35:08.680891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.211 ms
00:24:29.816  [2024-11-20 14:35:08.680908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.816  [2024-11-20 14:35:08.751857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.816  [2024-11-20 14:35:08.752181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:29.816  [2024-11-20 14:35:08.752227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 70.796 ms
00:24:29.816  [2024-11-20 14:35:08.752253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.816  [2024-11-20 14:35:08.752370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.816  [2024-11-20 14:35:08.752397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:29.816  [2024-11-20 14:35:08.752438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:24:29.816  [2024-11-20 14:35:08.752457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.816  [2024-11-20 14:35:08.753071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.816  [2024-11-20 14:35:08.753110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:29.816  [2024-11-20 14:35:08.753132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.394 ms
00:24:29.816  [2024-11-20 14:35:08.753150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.816  [2024-11-20 14:35:08.753483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.816  [2024-11-20 14:35:08.753521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:29.816  [2024-11-20 14:35:08.753543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.276 ms
00:24:29.816  [2024-11-20 14:35:08.753597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:29.816  [2024-11-20 14:35:08.779532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.816  [2024-11-20 14:35:08.779672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:29.816  [2024-11-20 14:35:08.779712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 25.877 ms
00:24:29.816  [2024-11-20 14:35:08.779733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.803352] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:24:30.075  [2024-11-20 14:35:08.803701] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:24:30.075  [2024-11-20 14:35:08.803743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.803763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:24:30.075  [2024-11-20 14:35:08.803787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.704 ms
00:24:30.075  [2024-11-20 14:35:08.803805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.836390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.836506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:24:30.075  [2024-11-20 14:35:08.836529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.487 ms
00:24:30.075  [2024-11-20 14:35:08.836542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.852558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.852655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:24:30.075  [2024-11-20 14:35:08.852677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.919 ms
00:24:30.075  [2024-11-20 14:35:08.852690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.869144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.869230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:24:30.075  [2024-11-20 14:35:08.869252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.386 ms
00:24:30.075  [2024-11-20 14:35:08.869265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.870244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.870284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:24:30.075  [2024-11-20 14:35:08.870302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.746 ms
00:24:30.075  [2024-11-20 14:35:08.870314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.945731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.945815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:24:30.075  [2024-11-20 14:35:08.945838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 75.381 ms
00:24:30.075  [2024-11-20 14:35:08.945862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.958837] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:30.075  [2024-11-20 14:35:08.962038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.962100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:24:30.075  [2024-11-20 14:35:08.962121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.088 ms
00:24:30.075  [2024-11-20 14:35:08.962135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.962277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.962302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:24:30.075  [2024-11-20 14:35:08.962317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:24:30.075  [2024-11-20 14:35:08.962329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.962460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.962483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:24:30.075  [2024-11-20 14:35:08.962498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.041 ms
00:24:30.075  [2024-11-20 14:35:08.962510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.962547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.962565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:24:30.075  [2024-11-20 14:35:08.962849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:24:30.075  [2024-11-20 14:35:08.962974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.963045] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:30.075  [2024-11-20 14:35:08.963066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.963084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:24:30.075  [2024-11-20 14:35:08.963097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.023 ms
00:24:30.075  [2024-11-20 14:35:08.963109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.995605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.995685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:24:30.075  [2024-11-20 14:35:08.995706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.462 ms
00:24:30.075  [2024-11-20 14:35:08.995720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.995859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:30.075  [2024-11-20 14:35:08.995881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:24:30.075  [2024-11-20 14:35:08.995896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.048 ms
00:24:30.075  [2024-11-20 14:35:08.995908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:30.075  [2024-11-20 14:35:08.997306] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 400.658 ms, result 0
00:24:31.029  
[2024-11-20T14:35:11.385Z] Copying: 30/1024 [MB] (30 MBps)
[2024-11-20T14:35:12.317Z] Copying: 61/1024 [MB] (31 MBps)
[2024-11-20T14:35:13.252Z] Copying: 91/1024 [MB] (30 MBps)
[2024-11-20T14:35:14.184Z] Copying: 121/1024 [MB] (29 MBps)
[2024-11-20T14:35:15.116Z] Copying: 152/1024 [MB] (30 MBps)
[2024-11-20T14:35:16.050Z] Copying: 182/1024 [MB] (30 MBps)
[2024-11-20T14:35:17.423Z] Copying: 212/1024 [MB] (29 MBps)
[2024-11-20T14:35:18.355Z] Copying: 238/1024 [MB] (26 MBps)
[2024-11-20T14:35:19.288Z] Copying: 268/1024 [MB] (30 MBps)
[2024-11-20T14:35:20.225Z] Copying: 299/1024 [MB] (30 MBps)
[2024-11-20T14:35:21.157Z] Copying: 330/1024 [MB] (31 MBps)
[2024-11-20T14:35:22.089Z] Copying: 360/1024 [MB] (29 MBps)
[2024-11-20T14:35:23.023Z] Copying: 392/1024 [MB] (32 MBps)
[2024-11-20T14:35:24.436Z] Copying: 422/1024 [MB] (29 MBps)
[2024-11-20T14:35:25.031Z] Copying: 451/1024 [MB] (29 MBps)
[2024-11-20T14:35:26.405Z] Copying: 479/1024 [MB] (27 MBps)
[2024-11-20T14:35:27.337Z] Copying: 508/1024 [MB] (29 MBps)
[2024-11-20T14:35:28.269Z] Copying: 539/1024 [MB] (31 MBps)
[2024-11-20T14:35:29.201Z] Copying: 571/1024 [MB] (31 MBps)
[2024-11-20T14:35:30.133Z] Copying: 602/1024 [MB] (31 MBps)
[2024-11-20T14:35:31.064Z] Copying: 632/1024 [MB] (30 MBps)
[2024-11-20T14:35:32.433Z] Copying: 663/1024 [MB] (31 MBps)
[2024-11-20T14:35:33.365Z] Copying: 694/1024 [MB] (31 MBps)
[2024-11-20T14:35:34.300Z] Copying: 724/1024 [MB] (29 MBps)
[2024-11-20T14:35:35.234Z] Copying: 750/1024 [MB] (26 MBps)
[2024-11-20T14:35:36.167Z] Copying: 782/1024 [MB] (31 MBps)
[2024-11-20T14:35:37.150Z] Copying: 811/1024 [MB] (29 MBps)
[2024-11-20T14:35:38.081Z] Copying: 839/1024 [MB] (28 MBps)
[2024-11-20T14:35:39.014Z] Copying: 869/1024 [MB] (29 MBps)
[2024-11-20T14:35:40.385Z] Copying: 900/1024 [MB] (30 MBps)
[2024-11-20T14:35:41.318Z] Copying: 931/1024 [MB] (31 MBps)
[2024-11-20T14:35:42.252Z] Copying: 962/1024 [MB] (30 MBps)
[2024-11-20T14:35:43.187Z] Copying: 993/1024 [MB] (30 MBps)
[2024-11-20T14:35:43.187Z] Copying: 1024/1024 [MB] (average 30 MBps)
00:25:04.205  [2024-11-20 14:35:42.973526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:42.973597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:25:04.205  [2024-11-20 14:35:42.973621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:25:04.205  [2024-11-20 14:35:42.973634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:42.973665] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:04.205  [2024-11-20 14:35:42.977024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:42.977062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:25:04.205  [2024-11-20 14:35:42.977079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.334 ms
00:25:04.205  [2024-11-20 14:35:42.977106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:42.978616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:42.978788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:25:04.205  [2024-11-20 14:35:42.978818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.481 ms
00:25:04.205  [2024-11-20 14:35:42.978831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:42.993557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:42.993736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:25:04.205  [2024-11-20 14:35:42.993766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 14.695 ms
00:25:04.205  [2024-11-20 14:35:42.993781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:43.000944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:43.001114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:25:04.205  [2024-11-20 14:35:43.001143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 7.093 ms
00:25:04.205  [2024-11-20 14:35:43.001157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:43.032696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:43.032758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:25:04.205  [2024-11-20 14:35:43.032778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.444 ms
00:25:04.205  [2024-11-20 14:35:43.032790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:43.050418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:43.050467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:25:04.205  [2024-11-20 14:35:43.050487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.571 ms
00:25:04.205  [2024-11-20 14:35:43.050500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:43.050678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:43.050701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:25:04.205  [2024-11-20 14:35:43.050733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.127 ms
00:25:04.205  [2024-11-20 14:35:43.050746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:43.082190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:43.082246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:25:04.205  [2024-11-20 14:35:43.082265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.419 ms
00:25:04.205  [2024-11-20 14:35:43.082278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:43.113576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:43.113632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:25:04.205  [2024-11-20 14:35:43.113675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.234 ms
00:25:04.205  [2024-11-20 14:35:43.113688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:43.144313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:43.144522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:25:04.205  [2024-11-20 14:35:43.144552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.575 ms
00:25:04.205  [2024-11-20 14:35:43.144565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:43.176137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.205  [2024-11-20 14:35:43.176200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:25:04.205  [2024-11-20 14:35:43.176220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.434 ms
00:25:04.205  [2024-11-20 14:35:43.176233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.205  [2024-11-20 14:35:43.176288] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:04.205  [2024-11-20 14:35:43.176313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.205  [2024-11-20 14:35:43.176328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.205  [2024-11-20 14:35:43.176341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.205  [2024-11-20 14:35:43.176354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.205  [2024-11-20 14:35:43.176366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.205  [2024-11-20 14:35:43.176379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.205  [2024-11-20 14:35:43.176391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.205  [2024-11-20 14:35:43.176403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.205  [2024-11-20 14:35:43.176416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.205  [2024-11-20 14:35:43.176428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.205  [2024-11-20 14:35:43.176441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.176995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:25:04.206  [2024-11-20 14:35:43.177626] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:25:04.207  [2024-11-20 14:35:43.177649] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         cd5e6f69-30c0-44af-9535-aa51982d8157
00:25:04.207  [2024-11-20 14:35:43.177671] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:25:04.207  [2024-11-20 14:35:43.177683] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:25:04.207  [2024-11-20 14:35:43.177694] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:25:04.207  [2024-11-20 14:35:43.177706] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:25:04.207  [2024-11-20 14:35:43.177717] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:04.207  [2024-11-20 14:35:43.177729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:25:04.207  [2024-11-20 14:35:43.177741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:25:04.207  [2024-11-20 14:35:43.177769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:25:04.207  [2024-11-20 14:35:43.177780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:25:04.207  [2024-11-20 14:35:43.177792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.207  [2024-11-20 14:35:43.177805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:25:04.207  [2024-11-20 14:35:43.177818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.506 ms
00:25:04.207  [2024-11-20 14:35:43.177830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.194747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.465  [2024-11-20 14:35:43.194920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:25:04.465  [2024-11-20 14:35:43.194950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.835 ms
00:25:04.465  [2024-11-20 14:35:43.194964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.195407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:04.465  [2024-11-20 14:35:43.195447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:25:04.465  [2024-11-20 14:35:43.195463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.410 ms
00:25:04.465  [2024-11-20 14:35:43.195475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.238712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.238775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:25:04.465  [2024-11-20 14:35:43.238794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.238806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.238882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.238897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:25:04.465  [2024-11-20 14:35:43.238911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.238923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.239030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.239051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:25:04.465  [2024-11-20 14:35:43.239064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.239076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.239099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.239113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:25:04.465  [2024-11-20 14:35:43.239125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.239137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.344746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.344829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:25:04.465  [2024-11-20 14:35:43.344850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.344870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.430249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.430492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:25:04.465  [2024-11-20 14:35:43.430523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.430538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.430677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.430698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:25:04.465  [2024-11-20 14:35:43.430712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.430723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.430774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.430790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:25:04.465  [2024-11-20 14:35:43.430803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.430815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.430942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.430968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:25:04.465  [2024-11-20 14:35:43.430982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.430994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.431046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.431065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:25:04.465  [2024-11-20 14:35:43.431077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.431089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.431133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.431155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:25:04.465  [2024-11-20 14:35:43.431167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.431179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.431232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:04.465  [2024-11-20 14:35:43.431249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:25:04.465  [2024-11-20 14:35:43.431262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:04.465  [2024-11-20 14:35:43.431273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:04.465  [2024-11-20 14:35:43.431426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 457.855 ms, result 0
00:25:05.840  
00:25:05.840  
00:25:05.840   14:35:44 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
00:25:05.840  [2024-11-20 14:35:44.567655] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:25:05.840  [2024-11-20 14:35:44.567853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79958 ]
00:25:05.840  [2024-11-20 14:35:44.757382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:06.098  [2024-11-20 14:35:44.861154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:06.356  [2024-11-20 14:35:45.186548] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:06.356  [2024-11-20 14:35:45.186650] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:06.616  [2024-11-20 14:35:45.347628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.347874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:25:06.616  [2024-11-20 14:35:45.347921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:25:06.616  [2024-11-20 14:35:45.347935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.348038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.348057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:25:06.616  [2024-11-20 14:35:45.348074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:25:06.616  [2024-11-20 14:35:45.348086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.348119] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:06.616  [2024-11-20 14:35:45.349109] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:06.616  [2024-11-20 14:35:45.349154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.349169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:25:06.616  [2024-11-20 14:35:45.349183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.043 ms
00:25:06.616  [2024-11-20 14:35:45.349194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.350366] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:06.616  [2024-11-20 14:35:45.366912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.367137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:25:06.616  [2024-11-20 14:35:45.367169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.544 ms
00:25:06.616  [2024-11-20 14:35:45.367182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.367315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.367335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:25:06.616  [2024-11-20 14:35:45.367348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.032 ms
00:25:06.616  [2024-11-20 14:35:45.367360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.371973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.372030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:25:06.616  [2024-11-20 14:35:45.372047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.485 ms
00:25:06.616  [2024-11-20 14:35:45.372068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.372175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.372195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:25:06.616  [2024-11-20 14:35:45.372208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.069 ms
00:25:06.616  [2024-11-20 14:35:45.372219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.372289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.372307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:25:06.616  [2024-11-20 14:35:45.372319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:25:06.616  [2024-11-20 14:35:45.372330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.372371] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:06.616  [2024-11-20 14:35:45.376707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.376749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:25:06.616  [2024-11-20 14:35:45.376767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.352 ms
00:25:06.616  [2024-11-20 14:35:45.376784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.376825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.376839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:25:06.616  [2024-11-20 14:35:45.376852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:25:06.616  [2024-11-20 14:35:45.376869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.376929] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:25:06.616  [2024-11-20 14:35:45.376962] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:25:06.616  [2024-11-20 14:35:45.377007] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:25:06.616  [2024-11-20 14:35:45.377031] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:25:06.616  [2024-11-20 14:35:45.377146] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:25:06.616  [2024-11-20 14:35:45.377162] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:25:06.616  [2024-11-20 14:35:45.377177] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:25:06.616  [2024-11-20 14:35:45.377191] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:25:06.616  [2024-11-20 14:35:45.377204] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:25:06.616  [2024-11-20 14:35:45.377217] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:25:06.616  [2024-11-20 14:35:45.377228] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:25:06.616  [2024-11-20 14:35:45.377238] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:25:06.616  [2024-11-20 14:35:45.377253] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:25:06.616  [2024-11-20 14:35:45.377266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.377277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:25:06.616  [2024-11-20 14:35:45.377289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.342 ms
00:25:06.616  [2024-11-20 14:35:45.377300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.377399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.616  [2024-11-20 14:35:45.377414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:25:06.616  [2024-11-20 14:35:45.377426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:25:06.616  [2024-11-20 14:35:45.377437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.616  [2024-11-20 14:35:45.377619] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:25:06.616  [2024-11-20 14:35:45.377658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:25:06.616  [2024-11-20 14:35:45.377681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:25:06.616  [2024-11-20 14:35:45.377693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:06.616  [2024-11-20 14:35:45.377704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:25:06.616  [2024-11-20 14:35:45.377715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:25:06.616  [2024-11-20 14:35:45.377726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:25:06.616  [2024-11-20 14:35:45.377737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:25:06.616  [2024-11-20 14:35:45.377748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:25:06.616  [2024-11-20 14:35:45.377758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:25:06.616  [2024-11-20 14:35:45.377769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:25:06.616  [2024-11-20 14:35:45.377779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:25:06.616  [2024-11-20 14:35:45.377788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:25:06.616  [2024-11-20 14:35:45.377799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:25:06.616  [2024-11-20 14:35:45.377809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:25:06.616  [2024-11-20 14:35:45.377833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:06.616  [2024-11-20 14:35:45.377844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:25:06.616  [2024-11-20 14:35:45.377855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:25:06.616  [2024-11-20 14:35:45.377872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:06.616  [2024-11-20 14:35:45.377891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:25:06.616  [2024-11-20 14:35:45.377904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:25:06.616  [2024-11-20 14:35:45.377915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:06.616  [2024-11-20 14:35:45.377926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:25:06.616  [2024-11-20 14:35:45.377936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:25:06.616  [2024-11-20 14:35:45.377946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:06.616  [2024-11-20 14:35:45.377956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:25:06.616  [2024-11-20 14:35:45.377966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:25:06.616  [2024-11-20 14:35:45.377977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:06.616  [2024-11-20 14:35:45.377989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:25:06.616  [2024-11-20 14:35:45.378000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:25:06.616  [2024-11-20 14:35:45.378010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:06.616  [2024-11-20 14:35:45.378020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:25:06.616  [2024-11-20 14:35:45.378030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:25:06.616  [2024-11-20 14:35:45.378040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:25:06.616  [2024-11-20 14:35:45.378050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:25:06.616  [2024-11-20 14:35:45.378060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:25:06.616  [2024-11-20 14:35:45.378070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:25:06.617  [2024-11-20 14:35:45.378081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:25:06.617  [2024-11-20 14:35:45.378091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:25:06.617  [2024-11-20 14:35:45.378101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:06.617  [2024-11-20 14:35:45.378111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:25:06.617  [2024-11-20 14:35:45.378122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:25:06.617  [2024-11-20 14:35:45.378132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:06.617  [2024-11-20 14:35:45.378143] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:25:06.617  [2024-11-20 14:35:45.378154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:25:06.617  [2024-11-20 14:35:45.378165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:25:06.617  [2024-11-20 14:35:45.378176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:06.617  [2024-11-20 14:35:45.378187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:25:06.617  [2024-11-20 14:35:45.378198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:25:06.617  [2024-11-20 14:35:45.378208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:25:06.617  [2024-11-20 14:35:45.378226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:25:06.617  [2024-11-20 14:35:45.378236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:25:06.617  [2024-11-20 14:35:45.378247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:25:06.617  [2024-11-20 14:35:45.378259] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:25:06.617  [2024-11-20 14:35:45.378273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:06.617  [2024-11-20 14:35:45.378286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:25:06.617  [2024-11-20 14:35:45.378297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:25:06.617  [2024-11-20 14:35:45.378308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:25:06.617  [2024-11-20 14:35:45.378318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:25:06.617  [2024-11-20 14:35:45.378329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:25:06.617  [2024-11-20 14:35:45.378341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:25:06.617  [2024-11-20 14:35:45.378353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:25:06.617  [2024-11-20 14:35:45.378364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:25:06.617  [2024-11-20 14:35:45.378375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:25:06.617  [2024-11-20 14:35:45.378386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:25:06.617  [2024-11-20 14:35:45.378397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:25:06.617  [2024-11-20 14:35:45.378408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:25:06.617  [2024-11-20 14:35:45.378419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:25:06.617  [2024-11-20 14:35:45.378430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:25:06.617  [2024-11-20 14:35:45.378441] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:25:06.617  [2024-11-20 14:35:45.378460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:06.617  [2024-11-20 14:35:45.378472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:25:06.617  [2024-11-20 14:35:45.378483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:25:06.617  [2024-11-20 14:35:45.378494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:25:06.617  [2024-11-20 14:35:45.378505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:25:06.617  [2024-11-20 14:35:45.378517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.378529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:25:06.617  [2024-11-20 14:35:45.378540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.996 ms
00:25:06.617  [2024-11-20 14:35:45.378552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.411982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.412043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:25:06.617  [2024-11-20 14:35:45.412064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.348 ms
00:25:06.617  [2024-11-20 14:35:45.412076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.412198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.412213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:25:06.617  [2024-11-20 14:35:45.412226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.065 ms
00:25:06.617  [2024-11-20 14:35:45.412237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.463150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.463223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:25:06.617  [2024-11-20 14:35:45.463244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 50.818 ms
00:25:06.617  [2024-11-20 14:35:45.463256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.463337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.463355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:25:06.617  [2024-11-20 14:35:45.463374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:25:06.617  [2024-11-20 14:35:45.463386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.463865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.463894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:25:06.617  [2024-11-20 14:35:45.463909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.362 ms
00:25:06.617  [2024-11-20 14:35:45.463920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.464079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.464105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:25:06.617  [2024-11-20 14:35:45.464118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.127 ms
00:25:06.617  [2024-11-20 14:35:45.464136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.481352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.481425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:25:06.617  [2024-11-20 14:35:45.481450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.174 ms
00:25:06.617  [2024-11-20 14:35:45.481462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.498210] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:25:06.617  [2024-11-20 14:35:45.498277] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:25:06.617  [2024-11-20 14:35:45.498299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.498313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:25:06.617  [2024-11-20 14:35:45.498328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.626 ms
00:25:06.617  [2024-11-20 14:35:45.498339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.528907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.528980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:25:06.617  [2024-11-20 14:35:45.529000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.498 ms
00:25:06.617  [2024-11-20 14:35:45.529013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.545332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.545387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:25:06.617  [2024-11-20 14:35:45.545407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.242 ms
00:25:06.617  [2024-11-20 14:35:45.545419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.561755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.561952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:25:06.617  [2024-11-20 14:35:45.561984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.283 ms
00:25:06.617  [2024-11-20 14:35:45.561996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.617  [2024-11-20 14:35:45.562906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.617  [2024-11-20 14:35:45.562945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:25:06.617  [2024-11-20 14:35:45.562961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.730 ms
00:25:06.617  [2024-11-20 14:35:45.562977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.876  [2024-11-20 14:35:45.638850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.876  [2024-11-20 14:35:45.638974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:25:06.876  [2024-11-20 14:35:45.639018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 75.834 ms
00:25:06.876  [2024-11-20 14:35:45.639041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.876  [2024-11-20 14:35:45.652397] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:25:06.876  [2024-11-20 14:35:45.655080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.876  [2024-11-20 14:35:45.655124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:25:06.876  [2024-11-20 14:35:45.655145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.930 ms
00:25:06.876  [2024-11-20 14:35:45.655157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.876  [2024-11-20 14:35:45.655294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.876  [2024-11-20 14:35:45.655315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:25:06.876  [2024-11-20 14:35:45.655329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:25:06.876  [2024-11-20 14:35:45.655344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.876  [2024-11-20 14:35:45.655448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.876  [2024-11-20 14:35:45.655468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:25:06.876  [2024-11-20 14:35:45.655481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.048 ms
00:25:06.876  [2024-11-20 14:35:45.655492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.876  [2024-11-20 14:35:45.655525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.876  [2024-11-20 14:35:45.655539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:25:06.876  [2024-11-20 14:35:45.655552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:25:06.876  [2024-11-20 14:35:45.655562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.876  [2024-11-20 14:35:45.655630] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:25:06.876  [2024-11-20 14:35:45.655648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.876  [2024-11-20 14:35:45.655659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:25:06.876  [2024-11-20 14:35:45.655670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:25:06.876  [2024-11-20 14:35:45.655681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.876  [2024-11-20 14:35:45.686764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.876  [2024-11-20 14:35:45.686833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:25:06.876  [2024-11-20 14:35:45.686853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.052 ms
00:25:06.876  [2024-11-20 14:35:45.686873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.876  [2024-11-20 14:35:45.686978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:06.876  [2024-11-20 14:35:45.686999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:25:06.876  [2024-11-20 14:35:45.687012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:25:06.876  [2024-11-20 14:35:45.687023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:06.876  [2024-11-20 14:35:45.688293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.181 ms, result 0
00:25:08.250  
[2024-11-20T14:35:48.165Z] Copying: 27/1024 [MB] (27 MBps)
[2024-11-20T14:35:49.099Z] Copying: 53/1024 [MB] (25 MBps)
[2024-11-20T14:35:50.031Z] Copying: 82/1024 [MB] (28 MBps)
[2024-11-20T14:35:50.965Z] Copying: 111/1024 [MB] (29 MBps)
[2024-11-20T14:35:52.342Z] Copying: 138/1024 [MB] (27 MBps)
[2024-11-20T14:35:53.275Z] Copying: 163/1024 [MB] (24 MBps)
[2024-11-20T14:35:54.207Z] Copying: 189/1024 [MB] (26 MBps)
[2024-11-20T14:35:55.141Z] Copying: 218/1024 [MB] (28 MBps)
[2024-11-20T14:35:56.074Z] Copying: 245/1024 [MB] (27 MBps)
[2024-11-20T14:35:57.008Z] Copying: 273/1024 [MB] (28 MBps)
[2024-11-20T14:35:57.942Z] Copying: 298/1024 [MB] (24 MBps)
[2024-11-20T14:35:59.315Z] Copying: 327/1024 [MB] (29 MBps)
[2024-11-20T14:36:00.249Z] Copying: 355/1024 [MB] (27 MBps)
[2024-11-20T14:36:01.182Z] Copying: 382/1024 [MB] (27 MBps)
[2024-11-20T14:36:02.114Z] Copying: 407/1024 [MB] (24 MBps)
[2024-11-20T14:36:03.046Z] Copying: 434/1024 [MB] (27 MBps)
[2024-11-20T14:36:03.981Z] Copying: 461/1024 [MB] (26 MBps)
[2024-11-20T14:36:04.914Z] Copying: 487/1024 [MB] (26 MBps)
[2024-11-20T14:36:06.289Z] Copying: 512/1024 [MB] (25 MBps)
[2024-11-20T14:36:07.225Z] Copying: 539/1024 [MB] (26 MBps)
[2024-11-20T14:36:08.159Z] Copying: 565/1024 [MB] (26 MBps)
[2024-11-20T14:36:09.093Z] Copying: 591/1024 [MB] (25 MBps)
[2024-11-20T14:36:10.028Z] Copying: 617/1024 [MB] (26 MBps)
[2024-11-20T14:36:10.961Z] Copying: 642/1024 [MB] (25 MBps)
[2024-11-20T14:36:11.945Z] Copying: 670/1024 [MB] (27 MBps)
[2024-11-20T14:36:13.319Z] Copying: 698/1024 [MB] (28 MBps)
[2024-11-20T14:36:14.251Z] Copying: 726/1024 [MB] (27 MBps)
[2024-11-20T14:36:15.184Z] Copying: 755/1024 [MB] (29 MBps)
[2024-11-20T14:36:16.120Z] Copying: 781/1024 [MB] (26 MBps)
[2024-11-20T14:36:17.052Z] Copying: 809/1024 [MB] (27 MBps)
[2024-11-20T14:36:17.986Z] Copying: 835/1024 [MB] (26 MBps)
[2024-11-20T14:36:18.918Z] Copying: 861/1024 [MB] (26 MBps)
[2024-11-20T14:36:20.289Z] Copying: 889/1024 [MB] (27 MBps)
[2024-11-20T14:36:21.223Z] Copying: 916/1024 [MB] (27 MBps)
[2024-11-20T14:36:22.156Z] Copying: 944/1024 [MB] (27 MBps)
[2024-11-20T14:36:23.099Z] Copying: 970/1024 [MB] (26 MBps)
[2024-11-20T14:36:24.032Z] Copying: 998/1024 [MB] (27 MBps)
[2024-11-20T14:36:24.290Z] Copying: 1024/1024 [MB] (average 27 MBps)
00:25:45.308  [2024-11-20 14:36:24.137474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.308  [2024-11-20 14:36:24.137816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:25:45.308  [2024-11-20 14:36:24.137862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:25:45.308  [2024-11-20 14:36:24.137878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.308  [2024-11-20 14:36:24.137919] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:45.308  [2024-11-20 14:36:24.141364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.308  [2024-11-20 14:36:24.141537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:25:45.308  [2024-11-20 14:36:24.141593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.417 ms
00:25:45.308  [2024-11-20 14:36:24.141609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.308  [2024-11-20 14:36:24.141864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.308  [2024-11-20 14:36:24.141893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:25:45.308  [2024-11-20 14:36:24.141909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.217 ms
00:25:45.308  [2024-11-20 14:36:24.141921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.308  [2024-11-20 14:36:24.146795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.308  [2024-11-20 14:36:24.146850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:25:45.308  [2024-11-20 14:36:24.146868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.851 ms
00:25:45.308  [2024-11-20 14:36:24.146882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.308  [2024-11-20 14:36:24.156004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.308  [2024-11-20 14:36:24.156086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:25:45.308  [2024-11-20 14:36:24.156116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.076 ms
00:25:45.308  [2024-11-20 14:36:24.156139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.308  [2024-11-20 14:36:24.188881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.308  [2024-11-20 14:36:24.188979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:25:45.308  [2024-11-20 14:36:24.189017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.617 ms
00:25:45.308  [2024-11-20 14:36:24.189037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.308  [2024-11-20 14:36:24.207689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.308  [2024-11-20 14:36:24.207763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:25:45.308  [2024-11-20 14:36:24.207784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.564 ms
00:25:45.308  [2024-11-20 14:36:24.207797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.308  [2024-11-20 14:36:24.208076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.308  [2024-11-20 14:36:24.208139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:25:45.308  [2024-11-20 14:36:24.208169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.200 ms
00:25:45.308  [2024-11-20 14:36:24.208189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.308  [2024-11-20 14:36:24.245005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.308  [2024-11-20 14:36:24.245079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:25:45.308  [2024-11-20 14:36:24.245099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 36.778 ms
00:25:45.308  [2024-11-20 14:36:24.245111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.308  [2024-11-20 14:36:24.277253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.308  [2024-11-20 14:36:24.277344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:25:45.308  [2024-11-20 14:36:24.277364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.079 ms
00:25:45.308  [2024-11-20 14:36:24.277376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.567  [2024-11-20 14:36:24.308496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.567  [2024-11-20 14:36:24.308564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:25:45.567  [2024-11-20 14:36:24.308600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.052 ms
00:25:45.567  [2024-11-20 14:36:24.308612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.567  [2024-11-20 14:36:24.339565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.567  [2024-11-20 14:36:24.339638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:25:45.567  [2024-11-20 14:36:24.339658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.822 ms
00:25:45.567  [2024-11-20 14:36:24.339670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.567  [2024-11-20 14:36:24.339734] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:25:45.567  [2024-11-20 14:36:24.339761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.339996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.567  [2024-11-20 14:36:24.340548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:25:45.568  [2024-11-20 14:36:24.340992] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:25:45.568  [2024-11-20 14:36:24.341008] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         cd5e6f69-30c0-44af-9535-aa51982d8157
00:25:45.568  [2024-11-20 14:36:24.341021] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:25:45.568  [2024-11-20 14:36:24.341031] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:25:45.568  [2024-11-20 14:36:24.341042] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:25:45.568  [2024-11-20 14:36:24.341053] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:25:45.568  [2024-11-20 14:36:24.341064] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:45.568  [2024-11-20 14:36:24.341075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:25:45.568  [2024-11-20 14:36:24.341101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:25:45.568  [2024-11-20 14:36:24.341111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:25:45.568  [2024-11-20 14:36:24.341122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:25:45.568  [2024-11-20 14:36:24.341133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.568  [2024-11-20 14:36:24.341144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:25:45.568  [2024-11-20 14:36:24.341157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.401 ms
00:25:45.568  [2024-11-20 14:36:24.341168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.568  [2024-11-20 14:36:24.357906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.568  [2024-11-20 14:36:24.357960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:25:45.568  [2024-11-20 14:36:24.357980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.681 ms
00:25:45.568  [2024-11-20 14:36:24.357991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.568  [2024-11-20 14:36:24.358453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:45.568  [2024-11-20 14:36:24.358483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:25:45.568  [2024-11-20 14:36:24.358498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.418 ms
00:25:45.568  [2024-11-20 14:36:24.358517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.568  [2024-11-20 14:36:24.403220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.568  [2024-11-20 14:36:24.403297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:25:45.568  [2024-11-20 14:36:24.403317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.568  [2024-11-20 14:36:24.403328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.568  [2024-11-20 14:36:24.403423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.568  [2024-11-20 14:36:24.403442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:25:45.568  [2024-11-20 14:36:24.403454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.568  [2024-11-20 14:36:24.403472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.568  [2024-11-20 14:36:24.403613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.568  [2024-11-20 14:36:24.403647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:25:45.568  [2024-11-20 14:36:24.403660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.568  [2024-11-20 14:36:24.403672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.568  [2024-11-20 14:36:24.403697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.568  [2024-11-20 14:36:24.403712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:25:45.568  [2024-11-20 14:36:24.403724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.568  [2024-11-20 14:36:24.403735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.568  [2024-11-20 14:36:24.508388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.568  [2024-11-20 14:36:24.508457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:25:45.568  [2024-11-20 14:36:24.508478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.568  [2024-11-20 14:36:24.508490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.826  [2024-11-20 14:36:24.594504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.826  [2024-11-20 14:36:24.594634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:25:45.826  [2024-11-20 14:36:24.594660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.826  [2024-11-20 14:36:24.594682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.826  [2024-11-20 14:36:24.594789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.826  [2024-11-20 14:36:24.594808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:25:45.826  [2024-11-20 14:36:24.594820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.826  [2024-11-20 14:36:24.594831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.826  [2024-11-20 14:36:24.594891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.826  [2024-11-20 14:36:24.594910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:25:45.826  [2024-11-20 14:36:24.594922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.826  [2024-11-20 14:36:24.594933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.826  [2024-11-20 14:36:24.595066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.826  [2024-11-20 14:36:24.595087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:25:45.826  [2024-11-20 14:36:24.595099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.826  [2024-11-20 14:36:24.595110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.826  [2024-11-20 14:36:24.595161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.826  [2024-11-20 14:36:24.595179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:25:45.827  [2024-11-20 14:36:24.595192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.827  [2024-11-20 14:36:24.595203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.827  [2024-11-20 14:36:24.595253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.827  [2024-11-20 14:36:24.595269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:25:45.827  [2024-11-20 14:36:24.595281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.827  [2024-11-20 14:36:24.595291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.827  [2024-11-20 14:36:24.595345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:45.827  [2024-11-20 14:36:24.595362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:25:45.827  [2024-11-20 14:36:24.595374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:25:45.827  [2024-11-20 14:36:24.595385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:45.827  [2024-11-20 14:36:24.595543] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 458.034 ms, result 0
00:25:46.758  
00:25:46.758  
00:25:46.758   14:36:25 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:25:49.284  /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:25:49.284   14:36:27 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:25:49.284  [2024-11-20 14:36:27.991129] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:25:49.284  [2024-11-20 14:36:27.991320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80386 ]
00:25:49.284  [2024-11-20 14:36:28.174964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:49.542  [2024-11-20 14:36:28.280253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:49.801  [2024-11-20 14:36:28.606437] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:49.801  [2024-11-20 14:36:28.606790] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:49.801  [2024-11-20 14:36:28.768273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:49.801  [2024-11-20 14:36:28.768337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:25:49.801  [2024-11-20 14:36:28.768365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:25:49.801  [2024-11-20 14:36:28.768379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:49.801  [2024-11-20 14:36:28.768458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:49.801  [2024-11-20 14:36:28.768478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:25:49.801  [2024-11-20 14:36:28.768496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.048 ms
00:25:49.801  [2024-11-20 14:36:28.768508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:49.801  [2024-11-20 14:36:28.768542] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:25:49.801  [2024-11-20 14:36:28.769619] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:25:49.801  [2024-11-20 14:36:28.769822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:49.801  [2024-11-20 14:36:28.769856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:25:49.801  [2024-11-20 14:36:28.769884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.282 ms
00:25:49.801  [2024-11-20 14:36:28.769910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:49.801  [2024-11-20 14:36:28.771167] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:25:50.060  [2024-11-20 14:36:28.787888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.060  [2024-11-20 14:36:28.787955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:25:50.060  [2024-11-20 14:36:28.787976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.720 ms
00:25:50.060  [2024-11-20 14:36:28.787989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.060  [2024-11-20 14:36:28.788103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.060  [2024-11-20 14:36:28.788126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:25:50.060  [2024-11-20 14:36:28.788140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.041 ms
00:25:50.060  [2024-11-20 14:36:28.788152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.060  [2024-11-20 14:36:28.792771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.060  [2024-11-20 14:36:28.792834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:25:50.060  [2024-11-20 14:36:28.792853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.498 ms
00:25:50.060  [2024-11-20 14:36:28.792875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.060  [2024-11-20 14:36:28.792992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.060  [2024-11-20 14:36:28.793014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:25:50.060  [2024-11-20 14:36:28.793028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:25:50.060  [2024-11-20 14:36:28.793040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.060  [2024-11-20 14:36:28.793119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.060  [2024-11-20 14:36:28.793138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:25:50.061  [2024-11-20 14:36:28.793152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.012 ms
00:25:50.061  [2024-11-20 14:36:28.793164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.061  [2024-11-20 14:36:28.793206] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:50.061  [2024-11-20 14:36:28.797589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.061  [2024-11-20 14:36:28.797651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:25:50.061  [2024-11-20 14:36:28.797670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.397 ms
00:25:50.061  [2024-11-20 14:36:28.797689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.061  [2024-11-20 14:36:28.797733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.061  [2024-11-20 14:36:28.797750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:25:50.061  [2024-11-20 14:36:28.797763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:25:50.061  [2024-11-20 14:36:28.797775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.061  [2024-11-20 14:36:28.797831] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:25:50.061  [2024-11-20 14:36:28.797865] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:25:50.061  [2024-11-20 14:36:28.797909] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:25:50.061  [2024-11-20 14:36:28.797935] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:25:50.061  [2024-11-20 14:36:28.798050] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:25:50.061  [2024-11-20 14:36:28.798067] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:25:50.061  [2024-11-20 14:36:28.798083] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:25:50.061  [2024-11-20 14:36:28.798099] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:25:50.061  [2024-11-20 14:36:28.798113] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:25:50.061  [2024-11-20 14:36:28.798127] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:25:50.061  [2024-11-20 14:36:28.798138] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:25:50.061  [2024-11-20 14:36:28.798149] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:25:50.061  [2024-11-20 14:36:28.798166] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:25:50.061  [2024-11-20 14:36:28.798180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.061  [2024-11-20 14:36:28.798192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:25:50.061  [2024-11-20 14:36:28.798205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.354 ms
00:25:50.061  [2024-11-20 14:36:28.798216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.061  [2024-11-20 14:36:28.798318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.061  [2024-11-20 14:36:28.798335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:25:50.061  [2024-11-20 14:36:28.798348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:25:50.061  [2024-11-20 14:36:28.798360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.061  [2024-11-20 14:36:28.798515] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:25:50.061  [2024-11-20 14:36:28.798538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:25:50.061  [2024-11-20 14:36:28.798552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:25:50.061  [2024-11-20 14:36:28.798565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:50.061  [2024-11-20 14:36:28.798604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:25:50.061  [2024-11-20 14:36:28.798617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:25:50.061  [2024-11-20 14:36:28.798628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:25:50.061  [2024-11-20 14:36:28.798642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:25:50.061  [2024-11-20 14:36:28.798654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:25:50.061  [2024-11-20 14:36:28.798666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:25:50.061  [2024-11-20 14:36:28.798677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:25:50.061  [2024-11-20 14:36:28.798689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:25:50.061  [2024-11-20 14:36:28.798700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:25:50.061  [2024-11-20 14:36:28.798711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:25:50.061  [2024-11-20 14:36:28.798722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:25:50.061  [2024-11-20 14:36:28.798748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:50.061  [2024-11-20 14:36:28.798760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:25:50.061  [2024-11-20 14:36:28.798773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:25:50.061  [2024-11-20 14:36:28.798784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:50.061  [2024-11-20 14:36:28.798796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:25:50.061  [2024-11-20 14:36:28.798807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:25:50.061  [2024-11-20 14:36:28.798819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:50.061  [2024-11-20 14:36:28.798830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:25:50.061  [2024-11-20 14:36:28.798841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:25:50.061  [2024-11-20 14:36:28.798852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:50.061  [2024-11-20 14:36:28.798863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:25:50.061  [2024-11-20 14:36:28.798874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:25:50.061  [2024-11-20 14:36:28.798885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:50.061  [2024-11-20 14:36:28.798896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:25:50.061  [2024-11-20 14:36:28.798907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:25:50.061  [2024-11-20 14:36:28.798918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:25:50.061  [2024-11-20 14:36:28.798929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:25:50.061  [2024-11-20 14:36:28.798940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:25:50.061  [2024-11-20 14:36:28.798951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:25:50.061  [2024-11-20 14:36:28.798962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:25:50.061  [2024-11-20 14:36:28.798977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:25:50.061  [2024-11-20 14:36:28.798997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:25:50.061  [2024-11-20 14:36:28.799018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:25:50.061  [2024-11-20 14:36:28.799039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:25:50.061  [2024-11-20 14:36:28.799063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:50.061  [2024-11-20 14:36:28.799085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:25:50.061  [2024-11-20 14:36:28.799107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:25:50.061  [2024-11-20 14:36:28.799126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:50.061  [2024-11-20 14:36:28.799146] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:25:50.061  [2024-11-20 14:36:28.799165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:25:50.061  [2024-11-20 14:36:28.799179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:25:50.061  [2024-11-20 14:36:28.799191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:25:50.061  [2024-11-20 14:36:28.799203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:25:50.061  [2024-11-20 14:36:28.799215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:25:50.061  [2024-11-20 14:36:28.799231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:25:50.061  [2024-11-20 14:36:28.799252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:25:50.061  [2024-11-20 14:36:28.799273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:25:50.061  [2024-11-20 14:36:28.799296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:25:50.061  [2024-11-20 14:36:28.799322] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:25:50.061  [2024-11-20 14:36:28.799347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:50.061  [2024-11-20 14:36:28.799373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:25:50.061  [2024-11-20 14:36:28.799395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:25:50.061  [2024-11-20 14:36:28.799410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:25:50.061  [2024-11-20 14:36:28.799438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:25:50.061  [2024-11-20 14:36:28.799451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:25:50.061  [2024-11-20 14:36:28.799463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:25:50.061  [2024-11-20 14:36:28.799480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:25:50.061  [2024-11-20 14:36:28.799500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:25:50.061  [2024-11-20 14:36:28.799522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:25:50.061  [2024-11-20 14:36:28.799547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:25:50.061  [2024-11-20 14:36:28.799590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:25:50.061  [2024-11-20 14:36:28.799617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:25:50.062  [2024-11-20 14:36:28.799631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:25:50.062  [2024-11-20 14:36:28.799646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:25:50.062  [2024-11-20 14:36:28.799662] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:25:50.062  [2024-11-20 14:36:28.799696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:50.062  [2024-11-20 14:36:28.799723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:25:50.062  [2024-11-20 14:36:28.799748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:25:50.062  [2024-11-20 14:36:28.799772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:25:50.062  [2024-11-20 14:36:28.799794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:25:50.062  [2024-11-20 14:36:28.799819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.799841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:25:50.062  [2024-11-20 14:36:28.799865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.374 ms
00:25:50.062  [2024-11-20 14:36:28.799906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.833680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.833763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:25:50.062  [2024-11-20 14:36:28.833786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.673 ms
00:25:50.062  [2024-11-20 14:36:28.833799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.833928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.833946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:25:50.062  [2024-11-20 14:36:28.833959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.066 ms
00:25:50.062  [2024-11-20 14:36:28.833972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.888931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.889004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:25:50.062  [2024-11-20 14:36:28.889027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 54.856 ms
00:25:50.062  [2024-11-20 14:36:28.889039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.889121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.889141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:25:50.062  [2024-11-20 14:36:28.889161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:25:50.062  [2024-11-20 14:36:28.889174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.889611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.889632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:25:50.062  [2024-11-20 14:36:28.889646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.328 ms
00:25:50.062  [2024-11-20 14:36:28.889658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.889820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.889841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:25:50.062  [2024-11-20 14:36:28.889864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.130 ms
00:25:50.062  [2024-11-20 14:36:28.889883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.906886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.907135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:25:50.062  [2024-11-20 14:36:28.907196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.970 ms
00:25:50.062  [2024-11-20 14:36:28.907223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.923931] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:25:50.062  [2024-11-20 14:36:28.923984] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:25:50.062  [2024-11-20 14:36:28.924007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.924021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:25:50.062  [2024-11-20 14:36:28.924036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.576 ms
00:25:50.062  [2024-11-20 14:36:28.924048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.954290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.954585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:25:50.062  [2024-11-20 14:36:28.954636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.177 ms
00:25:50.062  [2024-11-20 14:36:28.954663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.971075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.971148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:25:50.062  [2024-11-20 14:36:28.971171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.315 ms
00:25:50.062  [2024-11-20 14:36:28.971184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.987653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.987734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:25:50.062  [2024-11-20 14:36:28.987755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.394 ms
00:25:50.062  [2024-11-20 14:36:28.987768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.062  [2024-11-20 14:36:28.988749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.062  [2024-11-20 14:36:28.988965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:25:50.062  [2024-11-20 14:36:28.989011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.781 ms
00:25:50.062  [2024-11-20 14:36:28.989049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.321  [2024-11-20 14:36:29.064827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.321  [2024-11-20 14:36:29.064930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:25:50.321  [2024-11-20 14:36:29.064972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 75.723 ms
00:25:50.321  [2024-11-20 14:36:29.064985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.321  [2024-11-20 14:36:29.078501] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:25:50.321  [2024-11-20 14:36:29.081597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.321  [2024-11-20 14:36:29.081663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:25:50.321  [2024-11-20 14:36:29.081685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.507 ms
00:25:50.321  [2024-11-20 14:36:29.081697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.321  [2024-11-20 14:36:29.081851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.321  [2024-11-20 14:36:29.081874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:25:50.321  [2024-11-20 14:36:29.081889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:25:50.321  [2024-11-20 14:36:29.081907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.321  [2024-11-20 14:36:29.082007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.321  [2024-11-20 14:36:29.082027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:25:50.321  [2024-11-20 14:36:29.082040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.041 ms
00:25:50.321  [2024-11-20 14:36:29.082052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.321  [2024-11-20 14:36:29.082086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.321  [2024-11-20 14:36:29.082102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:25:50.321  [2024-11-20 14:36:29.082115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:25:50.322  [2024-11-20 14:36:29.082127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.322  [2024-11-20 14:36:29.082178] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:25:50.322  [2024-11-20 14:36:29.082196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.322  [2024-11-20 14:36:29.082208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:25:50.322  [2024-11-20 14:36:29.082221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.020 ms
00:25:50.322  [2024-11-20 14:36:29.082233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.322  [2024-11-20 14:36:29.115328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.322  [2024-11-20 14:36:29.115427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:25:50.322  [2024-11-20 14:36:29.115451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.063 ms
00:25:50.322  [2024-11-20 14:36:29.115478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.322  [2024-11-20 14:36:29.115665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.322  [2024-11-20 14:36:29.115687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:25:50.322  [2024-11-20 14:36:29.115701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.048 ms
00:25:50.322  [2024-11-20 14:36:29.115714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:25:50.322  [2024-11-20 14:36:29.117159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 348.343 ms, result 0
00:25:51.257  
[2024-11-20T14:36:31.173Z] Copying: 28/1024 [MB] (28 MBps)
[2024-11-20T14:36:32.547Z] Copying: 57/1024 [MB] (29 MBps)
[2024-11-20T14:36:33.484Z] Copying: 84/1024 [MB] (27 MBps)
[2024-11-20T14:36:34.416Z] Copying: 111/1024 [MB] (26 MBps)
[2024-11-20T14:36:35.351Z] Copying: 139/1024 [MB] (27 MBps)
[2024-11-20T14:36:36.284Z] Copying: 166/1024 [MB] (27 MBps)
[2024-11-20T14:36:37.218Z] Copying: 196/1024 [MB] (29 MBps)
[2024-11-20T14:36:38.151Z] Copying: 225/1024 [MB] (29 MBps)
[2024-11-20T14:36:39.525Z] Copying: 255/1024 [MB] (29 MBps)
[2024-11-20T14:36:40.458Z] Copying: 285/1024 [MB] (29 MBps)
[2024-11-20T14:36:41.392Z] Copying: 314/1024 [MB] (29 MBps)
[2024-11-20T14:36:42.327Z] Copying: 343/1024 [MB] (28 MBps)
[2024-11-20T14:36:43.262Z] Copying: 373/1024 [MB] (29 MBps)
[2024-11-20T14:36:44.198Z] Copying: 400/1024 [MB] (26 MBps)
[2024-11-20T14:36:45.133Z] Copying: 428/1024 [MB] (28 MBps)
[2024-11-20T14:36:46.543Z] Copying: 457/1024 [MB] (28 MBps)
[2024-11-20T14:36:47.476Z] Copying: 486/1024 [MB] (28 MBps)
[2024-11-20T14:36:48.411Z] Copying: 514/1024 [MB] (28 MBps)
[2024-11-20T14:36:49.354Z] Copying: 543/1024 [MB] (28 MBps)
[2024-11-20T14:36:50.295Z] Copying: 571/1024 [MB] (28 MBps)
[2024-11-20T14:36:51.292Z] Copying: 598/1024 [MB] (27 MBps)
[2024-11-20T14:36:52.227Z] Copying: 626/1024 [MB] (27 MBps)
[2024-11-20T14:36:53.163Z] Copying: 655/1024 [MB] (29 MBps)
[2024-11-20T14:36:54.538Z] Copying: 686/1024 [MB] (30 MBps)
[2024-11-20T14:36:55.474Z] Copying: 717/1024 [MB] (30 MBps)
[2024-11-20T14:36:56.409Z] Copying: 746/1024 [MB] (29 MBps)
[2024-11-20T14:36:57.400Z] Copying: 773/1024 [MB] (26 MBps)
[2024-11-20T14:36:58.335Z] Copying: 801/1024 [MB] (27 MBps)
[2024-11-20T14:36:59.270Z] Copying: 829/1024 [MB] (27 MBps)
[2024-11-20T14:37:00.203Z] Copying: 858/1024 [MB] (29 MBps)
[2024-11-20T14:37:01.140Z] Copying: 887/1024 [MB] (28 MBps)
[2024-11-20T14:37:02.566Z] Copying: 915/1024 [MB] (28 MBps)
[2024-11-20T14:37:03.134Z] Copying: 945/1024 [MB] (29 MBps)
[2024-11-20T14:37:04.510Z] Copying: 975/1024 [MB] (29 MBps)
[2024-11-20T14:37:05.447Z] Copying: 1004/1024 [MB] (29 MBps)
[2024-11-20T14:37:06.384Z] Copying: 1023/1024 [MB] (18 MBps)
[2024-11-20T14:37:06.384Z] Copying: 1024/1024 [MB] (average 27 MBps)
00:26:27.402  [2024-11-20 14:37:06.094584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.402  [2024-11-20 14:37:06.094666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:26:27.402  [2024-11-20 14:37:06.094702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:26:27.402  [2024-11-20 14:37:06.094745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.402  [2024-11-20 14:37:06.096098] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:27.402  [2024-11-20 14:37:06.101658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.402  [2024-11-20 14:37:06.101708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:26:27.402  [2024-11-20 14:37:06.101738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.495 ms
00:26:27.402  [2024-11-20 14:37:06.101763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.402  [2024-11-20 14:37:06.116440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.402  [2024-11-20 14:37:06.116500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:26:27.402  [2024-11-20 14:37:06.116533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.791 ms
00:26:27.402  [2024-11-20 14:37:06.116583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.402  [2024-11-20 14:37:06.138290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.402  [2024-11-20 14:37:06.138381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:26:27.402  [2024-11-20 14:37:06.138416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 21.660 ms
00:26:27.402  [2024-11-20 14:37:06.138449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.402  [2024-11-20 14:37:06.145343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.402  [2024-11-20 14:37:06.145391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:26:27.402  [2024-11-20 14:37:06.145420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.752 ms
00:26:27.402  [2024-11-20 14:37:06.145444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.402  [2024-11-20 14:37:06.177264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.402  [2024-11-20 14:37:06.177331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:26:27.402  [2024-11-20 14:37:06.177364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.670 ms
00:26:27.402  [2024-11-20 14:37:06.177385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.402  [2024-11-20 14:37:06.195413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.402  [2024-11-20 14:37:06.195489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:26:27.402  [2024-11-20 14:37:06.195522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.887 ms
00:26:27.402  [2024-11-20 14:37:06.195543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.402  [2024-11-20 14:37:06.283792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.402  [2024-11-20 14:37:06.283891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:26:27.402  [2024-11-20 14:37:06.283923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 88.100 ms
00:26:27.402  [2024-11-20 14:37:06.283946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.402  [2024-11-20 14:37:06.317769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.402  [2024-11-20 14:37:06.317867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:26:27.402  [2024-11-20 14:37:06.317900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.778 ms
00:26:27.402  [2024-11-20 14:37:06.317921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.402  [2024-11-20 14:37:06.351654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.402  [2024-11-20 14:37:06.351775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:26:27.402  [2024-11-20 14:37:06.351808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.612 ms
00:26:27.402  [2024-11-20 14:37:06.351829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.662  [2024-11-20 14:37:06.385407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.662  [2024-11-20 14:37:06.385500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:26:27.662  [2024-11-20 14:37:06.385537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.445 ms
00:26:27.662  [2024-11-20 14:37:06.385558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.662  [2024-11-20 14:37:06.418198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.662  [2024-11-20 14:37:06.418277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:26:27.662  [2024-11-20 14:37:06.418310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.391 ms
00:26:27.662  [2024-11-20 14:37:06.418330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.662  [2024-11-20 14:37:06.418428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:27.662  [2024-11-20 14:37:06.418465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   122880 / 261120 	wr_cnt: 1	state: open
00:26:27.662  [2024-11-20 14:37:06.418491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.418995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.662  [2024-11-20 14:37:06.419260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.419994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:26:27.663  [2024-11-20 14:37:06.420808] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:26:27.663  [2024-11-20 14:37:06.420852] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         cd5e6f69-30c0-44af-9535-aa51982d8157
00:26:27.663  [2024-11-20 14:37:06.420876] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    122880
00:26:27.663  [2024-11-20 14:37:06.420896] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        123840
00:26:27.663  [2024-11-20 14:37:06.420917] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         122880
00:26:27.663  [2024-11-20 14:37:06.420941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0078
00:26:27.663  [2024-11-20 14:37:06.420962] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:26:27.663  [2024-11-20 14:37:06.420998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:26:27.663  [2024-11-20 14:37:06.421035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:26:27.663  [2024-11-20 14:37:06.421054] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:26:27.663  [2024-11-20 14:37:06.421073] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:26:27.663  [2024-11-20 14:37:06.421096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.663  [2024-11-20 14:37:06.421118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:26:27.663  [2024-11-20 14:37:06.421141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.671 ms
00:26:27.663  [2024-11-20 14:37:06.421162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.663  [2024-11-20 14:37:06.439980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.663  [2024-11-20 14:37:06.440049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:26:27.663  [2024-11-20 14:37:06.440094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.699 ms
00:26:27.663  [2024-11-20 14:37:06.440131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.663  [2024-11-20 14:37:06.440738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:27.663  [2024-11-20 14:37:06.440780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:26:27.663  [2024-11-20 14:37:06.440808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.537 ms
00:26:27.663  [2024-11-20 14:37:06.440831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.663  [2024-11-20 14:37:06.484543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.663  [2024-11-20 14:37:06.484642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:26:27.663  [2024-11-20 14:37:06.484674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.663  [2024-11-20 14:37:06.484694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.663  [2024-11-20 14:37:06.484811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.663  [2024-11-20 14:37:06.484838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:26:27.664  [2024-11-20 14:37:06.484860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.664  [2024-11-20 14:37:06.484879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.664  [2024-11-20 14:37:06.485090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.664  [2024-11-20 14:37:06.485131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:26:27.664  [2024-11-20 14:37:06.485175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.664  [2024-11-20 14:37:06.485197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.664  [2024-11-20 14:37:06.485237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.664  [2024-11-20 14:37:06.485270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:26:27.664  [2024-11-20 14:37:06.485293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.664  [2024-11-20 14:37:06.485314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.664  [2024-11-20 14:37:06.592471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.664  [2024-11-20 14:37:06.592550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:26:27.664  [2024-11-20 14:37:06.592608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.664  [2024-11-20 14:37:06.592629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.922  [2024-11-20 14:37:06.678743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.923  [2024-11-20 14:37:06.678820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:26:27.923  [2024-11-20 14:37:06.678853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.923  [2024-11-20 14:37:06.678872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.923  [2024-11-20 14:37:06.679017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.923  [2024-11-20 14:37:06.679046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:26:27.923  [2024-11-20 14:37:06.679065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.923  [2024-11-20 14:37:06.679101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.923  [2024-11-20 14:37:06.679176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.923  [2024-11-20 14:37:06.679205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:26:27.923  [2024-11-20 14:37:06.679226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.923  [2024-11-20 14:37:06.679255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.923  [2024-11-20 14:37:06.679447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.923  [2024-11-20 14:37:06.679480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:26:27.923  [2024-11-20 14:37:06.679504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.923  [2024-11-20 14:37:06.679525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.923  [2024-11-20 14:37:06.679644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.923  [2024-11-20 14:37:06.679676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:26:27.923  [2024-11-20 14:37:06.679699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.923  [2024-11-20 14:37:06.679720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.923  [2024-11-20 14:37:06.679792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.923  [2024-11-20 14:37:06.679819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:26:27.923  [2024-11-20 14:37:06.679841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.923  [2024-11-20 14:37:06.679861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.923  [2024-11-20 14:37:06.679953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:26:27.923  [2024-11-20 14:37:06.679982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:26:27.923  [2024-11-20 14:37:06.680006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:26:27.923  [2024-11-20 14:37:06.680028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:27.923  [2024-11-20 14:37:06.680270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 587.846 ms, result 0
00:26:29.305  
00:26:29.305  
00:26:29.305   14:37:08 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:26:29.305  [2024-11-20 14:37:08.139222] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:26:29.305  [2024-11-20 14:37:08.139402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80778 ]
00:26:29.563  [2024-11-20 14:37:08.319499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:29.563  [2024-11-20 14:37:08.475772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:30.131  [2024-11-20 14:37:08.807077] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:30.131  [2024-11-20 14:37:08.807177] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:26:30.131  [2024-11-20 14:37:08.970315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.131  [2024-11-20 14:37:08.970383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:26:30.131  [2024-11-20 14:37:08.970409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:26:30.131  [2024-11-20 14:37:08.970421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.131  [2024-11-20 14:37:08.970489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.131  [2024-11-20 14:37:08.970507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:26:30.131  [2024-11-20 14:37:08.970524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.039 ms
00:26:30.131  [2024-11-20 14:37:08.970535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.131  [2024-11-20 14:37:08.970581] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:26:30.131  [2024-11-20 14:37:08.971543] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:26:30.131  [2024-11-20 14:37:08.971601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.131  [2024-11-20 14:37:08.971616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:26:30.131  [2024-11-20 14:37:08.971629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.042 ms
00:26:30.131  [2024-11-20 14:37:08.971640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.131  [2024-11-20 14:37:08.972881] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:26:30.132  [2024-11-20 14:37:08.989835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.132  [2024-11-20 14:37:08.989886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:26:30.132  [2024-11-20 14:37:08.989905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.956 ms
00:26:30.132  [2024-11-20 14:37:08.989917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.132  [2024-11-20 14:37:08.989999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.132  [2024-11-20 14:37:08.990018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:26:30.132  [2024-11-20 14:37:08.990031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.025 ms
00:26:30.132  [2024-11-20 14:37:08.990042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.132  [2024-11-20 14:37:08.994486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.132  [2024-11-20 14:37:08.994538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:26:30.132  [2024-11-20 14:37:08.994555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.348 ms
00:26:30.132  [2024-11-20 14:37:08.994615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.132  [2024-11-20 14:37:08.994721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.132  [2024-11-20 14:37:08.994740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:26:30.132  [2024-11-20 14:37:08.994752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.067 ms
00:26:30.132  [2024-11-20 14:37:08.994763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.132  [2024-11-20 14:37:08.994829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.132  [2024-11-20 14:37:08.994845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:26:30.132  [2024-11-20 14:37:08.994857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.011 ms
00:26:30.132  [2024-11-20 14:37:08.994869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.132  [2024-11-20 14:37:08.994908] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:26:30.132  [2024-11-20 14:37:08.999239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.132  [2024-11-20 14:37:08.999282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:26:30.132  [2024-11-20 14:37:08.999299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.345 ms
00:26:30.132  [2024-11-20 14:37:08.999316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.132  [2024-11-20 14:37:08.999356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.132  [2024-11-20 14:37:08.999371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:26:30.132  [2024-11-20 14:37:08.999383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:26:30.132  [2024-11-20 14:37:08.999394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.132  [2024-11-20 14:37:08.999456] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:26:30.132  [2024-11-20 14:37:08.999488] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:26:30.132  [2024-11-20 14:37:08.999531] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:26:30.132  [2024-11-20 14:37:08.999555] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:26:30.132  [2024-11-20 14:37:08.999693] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:26:30.132  [2024-11-20 14:37:08.999712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:26:30.132  [2024-11-20 14:37:08.999728] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:26:30.132  [2024-11-20 14:37:08.999752] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:26:30.132  [2024-11-20 14:37:08.999776] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:26:30.132  [2024-11-20 14:37:08.999798] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:26:30.132  [2024-11-20 14:37:08.999812] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:26:30.132  [2024-11-20 14:37:08.999823] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:26:30.132  [2024-11-20 14:37:08.999841] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:26:30.132  [2024-11-20 14:37:08.999853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.132  [2024-11-20 14:37:08.999865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:26:30.132  [2024-11-20 14:37:08.999877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.402 ms
00:26:30.132  [2024-11-20 14:37:08.999889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.132  [2024-11-20 14:37:08.999991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.132  [2024-11-20 14:37:09.000006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:26:30.132  [2024-11-20 14:37:09.000018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.071 ms
00:26:30.132  [2024-11-20 14:37:09.000030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.132  [2024-11-20 14:37:09.000182] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:26:30.132  [2024-11-20 14:37:09.000214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:26:30.132  [2024-11-20 14:37:09.000235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:26:30.132  [2024-11-20 14:37:09.000256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:30.132  [2024-11-20 14:37:09.000270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:26:30.132  [2024-11-20 14:37:09.000281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:26:30.132  [2024-11-20 14:37:09.000292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:26:30.132  [2024-11-20 14:37:09.000302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:26:30.132  [2024-11-20 14:37:09.000313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:26:30.132  [2024-11-20 14:37:09.000323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:26:30.132  [2024-11-20 14:37:09.000334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:26:30.132  [2024-11-20 14:37:09.000344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:26:30.132  [2024-11-20 14:37:09.000354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:26:30.132  [2024-11-20 14:37:09.000365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:26:30.132  [2024-11-20 14:37:09.000376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:26:30.132  [2024-11-20 14:37:09.000400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:30.132  [2024-11-20 14:37:09.000411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:26:30.132  [2024-11-20 14:37:09.000422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:26:30.132  [2024-11-20 14:37:09.000433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:30.132  [2024-11-20 14:37:09.000443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:26:30.132  [2024-11-20 14:37:09.000454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:26:30.132  [2024-11-20 14:37:09.000464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:30.132  [2024-11-20 14:37:09.000474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:26:30.132  [2024-11-20 14:37:09.000484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:26:30.132  [2024-11-20 14:37:09.000494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:30.132  [2024-11-20 14:37:09.000505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:26:30.132  [2024-11-20 14:37:09.000515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:26:30.132  [2024-11-20 14:37:09.000526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:30.132  [2024-11-20 14:37:09.000536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:26:30.132  [2024-11-20 14:37:09.000547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:26:30.132  [2024-11-20 14:37:09.000558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:26:30.132  [2024-11-20 14:37:09.000583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:26:30.132  [2024-11-20 14:37:09.000597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:26:30.132  [2024-11-20 14:37:09.000607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:26:30.132  [2024-11-20 14:37:09.000618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:26:30.132  [2024-11-20 14:37:09.000629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:26:30.132  [2024-11-20 14:37:09.000639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:26:30.132  [2024-11-20 14:37:09.000649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:26:30.132  [2024-11-20 14:37:09.000659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:26:30.133  [2024-11-20 14:37:09.000669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:30.133  [2024-11-20 14:37:09.000680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:26:30.133  [2024-11-20 14:37:09.000690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:26:30.133  [2024-11-20 14:37:09.000700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:30.133  [2024-11-20 14:37:09.000711] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:26:30.133  [2024-11-20 14:37:09.000722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:26:30.133  [2024-11-20 14:37:09.000733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:26:30.133  [2024-11-20 14:37:09.000743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:26:30.133  [2024-11-20 14:37:09.000755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:26:30.133  [2024-11-20 14:37:09.000766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:26:30.133  [2024-11-20 14:37:09.000776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:26:30.133  [2024-11-20 14:37:09.000787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:26:30.133  [2024-11-20 14:37:09.000797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:26:30.133  [2024-11-20 14:37:09.000808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:26:30.133  [2024-11-20 14:37:09.000820] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:26:30.133  [2024-11-20 14:37:09.000834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:30.133  [2024-11-20 14:37:09.000846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:26:30.133  [2024-11-20 14:37:09.000858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:26:30.133  [2024-11-20 14:37:09.000869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:26:30.133  [2024-11-20 14:37:09.000880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:26:30.133  [2024-11-20 14:37:09.000892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:26:30.133  [2024-11-20 14:37:09.000904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:26:30.133  [2024-11-20 14:37:09.000915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:26:30.133  [2024-11-20 14:37:09.000926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:26:30.133  [2024-11-20 14:37:09.000937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:26:30.133  [2024-11-20 14:37:09.000948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:26:30.133  [2024-11-20 14:37:09.000959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:26:30.133  [2024-11-20 14:37:09.000970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:26:30.133  [2024-11-20 14:37:09.000981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:26:30.133  [2024-11-20 14:37:09.000993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:26:30.133  [2024-11-20 14:37:09.001004] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:26:30.133  [2024-11-20 14:37:09.001022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:26:30.133  [2024-11-20 14:37:09.001034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:26:30.133  [2024-11-20 14:37:09.001046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:26:30.133  [2024-11-20 14:37:09.001057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:26:30.133  [2024-11-20 14:37:09.001068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
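In the superblock dump above, each region is encoded as blk_offs/blk_sz in FTL blocks, while dump_region earlier printed the same regions in MiB. A minimal conversion sketch, assuming the 4 KiB block size this run uses elsewhere (block_size=4096 in the dirty_shutdown setup below):

```python
# Convert a superblock region's blk_offs/blk_sz to the MiB figures printed
# by dump_region. Assumption: 4 KiB FTL block size, matching the
# block_size=4096 used later in this run.
FTL_BLOCK_SIZE = 4096

def blocks_to_mib(blocks: int) -> float:
    return blocks * FTL_BLOCK_SIZE / (1024 * 1024)

# Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 from the nvc layout:
print(f"offset: {blocks_to_mib(0x71a0):.2f} MiB")  # 113.62 MiB -> trim_log
print(f"blocks: {blocks_to_mib(0x20):.2f} MiB")    # 0.12 MiB
```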
00:26:30.133  [2024-11-20 14:37:09.001081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.133  [2024-11-20 14:37:09.001092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:26:30.133  [2024-11-20 14:37:09.001104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.971 ms
00:26:30.133  [2024-11-20 14:37:09.001115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.133  [2024-11-20 14:37:09.035172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.133  [2024-11-20 14:37:09.035252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:26:30.133  [2024-11-20 14:37:09.035277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.993 ms
00:26:30.133  [2024-11-20 14:37:09.035290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.133  [2024-11-20 14:37:09.035426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.133  [2024-11-20 14:37:09.035444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:26:30.133  [2024-11-20 14:37:09.035456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.066 ms
00:26:30.133  [2024-11-20 14:37:09.035468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.133  [2024-11-20 14:37:09.091685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.133  [2024-11-20 14:37:09.091752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:26:30.133  [2024-11-20 14:37:09.091773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 56.120 ms
00:26:30.133  [2024-11-20 14:37:09.091785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.133  [2024-11-20 14:37:09.091866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.133  [2024-11-20 14:37:09.091883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:26:30.133  [2024-11-20 14:37:09.091903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:26:30.133  [2024-11-20 14:37:09.091915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.133  [2024-11-20 14:37:09.092316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.133  [2024-11-20 14:37:09.092347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:26:30.133  [2024-11-20 14:37:09.092361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.291 ms
00:26:30.133  [2024-11-20 14:37:09.092373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.133  [2024-11-20 14:37:09.092530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.133  [2024-11-20 14:37:09.092558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:26:30.133  [2024-11-20 14:37:09.092586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.127 ms
00:26:30.133  [2024-11-20 14:37:09.092607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.133  [2024-11-20 14:37:09.109157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.133  [2024-11-20 14:37:09.109216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:26:30.133  [2024-11-20 14:37:09.109240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.518 ms
00:26:30.133  [2024-11-20 14:37:09.109252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.404  [2024-11-20 14:37:09.125765] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:26:30.404  [2024-11-20 14:37:09.125822] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:26:30.404  [2024-11-20 14:37:09.125843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.404  [2024-11-20 14:37:09.125855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:26:30.404  [2024-11-20 14:37:09.125869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.427 ms
00:26:30.404  [2024-11-20 14:37:09.125881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.404  [2024-11-20 14:37:09.158450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.404  [2024-11-20 14:37:09.158513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:26:30.404  [2024-11-20 14:37:09.158532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.510 ms
00:26:30.405  [2024-11-20 14:37:09.158544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.175436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.175498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:26:30.405  [2024-11-20 14:37:09.175517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.795 ms
00:26:30.405  [2024-11-20 14:37:09.175529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.191404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.191474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:26:30.405  [2024-11-20 14:37:09.191491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.812 ms
00:26:30.405  [2024-11-20 14:37:09.191503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.192336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.192388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:26:30.405  [2024-11-20 14:37:09.192403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.688 ms
00:26:30.405  [2024-11-20 14:37:09.192419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.271437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.271512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:26:30.405  [2024-11-20 14:37:09.271541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 78.983 ms
00:26:30.405  [2024-11-20 14:37:09.271553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.284809] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:26:30.405  [2024-11-20 14:37:09.287676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.287714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:26:30.405  [2024-11-20 14:37:09.287731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.033 ms
00:26:30.405  [2024-11-20 14:37:09.287744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.287889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.287922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:26:30.405  [2024-11-20 14:37:09.287935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:26:30.405  [2024-11-20 14:37:09.287952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.289552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.289614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:26:30.405  [2024-11-20 14:37:09.289631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.535 ms
00:26:30.405  [2024-11-20 14:37:09.289643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.289687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.289712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:26:30.405  [2024-11-20 14:37:09.289734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:26:30.405  [2024-11-20 14:37:09.289755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.289813] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:26:30.405  [2024-11-20 14:37:09.289830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.289841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:26:30.405  [2024-11-20 14:37:09.289854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:26:30.405  [2024-11-20 14:37:09.289865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.321859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.321906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:26:30.405  [2024-11-20 14:37:09.321924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.965 ms
00:26:30.405  [2024-11-20 14:37:09.321944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.322035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:30.405  [2024-11-20 14:37:09.322054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:26:30.405  [2024-11-20 14:37:09.322067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.043 ms
00:26:30.405  [2024-11-20 14:37:09.322078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:26:30.405  [2024-11-20 14:37:09.323312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 352.434 ms, result 0
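For reference, the trace_step entries above can be totaled mechanically. A minimal sketch, assuming the exact 'duration: X ms' format in this log; applied to the 'FTL startup' actions it yields about 319 ms of the reported 352.434 ms, the remainder presumably being time spent between steps:

```python
import re

DUR_RE = re.compile(r"duration:\s*([\d.]+)\s*ms")

def sum_step_durations(lines):
    """Total the 'duration: X ms' figures from trace_step output."""
    return sum(float(m.group(1)) for line in lines
               if (m := DUR_RE.search(line)))

# Over the 'FTL startup' actions above this gives ~318.9 ms, versus the
# 352.434 ms reported by finish_msg; the gap is unattributed time between
# steps (an assumption, not stated by the log itself).
```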
00:26:31.783  
[2024-11-20T14:37:11.700Z] Copying: 25/1024 [MB] (25 MBps)
[2024-11-20T14:37:12.636Z] Copying: 51/1024 [MB] (25 MBps)
[2024-11-20T14:37:13.620Z] Copying: 77/1024 [MB] (26 MBps)
[2024-11-20T14:37:14.994Z] Copying: 103/1024 [MB] (26 MBps)
[2024-11-20T14:37:15.928Z] Copying: 128/1024 [MB] (24 MBps)
[2024-11-20T14:37:16.859Z] Copying: 152/1024 [MB] (24 MBps)
[2024-11-20T14:37:17.793Z] Copying: 180/1024 [MB] (27 MBps)
[2024-11-20T14:37:18.726Z] Copying: 207/1024 [MB] (27 MBps)
[2024-11-20T14:37:19.755Z] Copying: 233/1024 [MB] (26 MBps)
[2024-11-20T14:37:20.691Z] Copying: 261/1024 [MB] (27 MBps)
[2024-11-20T14:37:21.626Z] Copying: 288/1024 [MB] (26 MBps)
[2024-11-20T14:37:22.562Z] Copying: 313/1024 [MB] (25 MBps)
[2024-11-20T14:37:23.935Z] Copying: 341/1024 [MB] (27 MBps)
[2024-11-20T14:37:24.870Z] Copying: 368/1024 [MB] (27 MBps)
[2024-11-20T14:37:25.805Z] Copying: 396/1024 [MB] (27 MBps)
[2024-11-20T14:37:26.739Z] Copying: 425/1024 [MB] (28 MBps)
[2024-11-20T14:37:27.672Z] Copying: 450/1024 [MB] (25 MBps)
[2024-11-20T14:37:28.607Z] Copying: 476/1024 [MB] (25 MBps)
[2024-11-20T14:37:29.983Z] Copying: 503/1024 [MB] (27 MBps)
[2024-11-20T14:37:30.916Z] Copying: 526/1024 [MB] (22 MBps)
[2024-11-20T14:37:31.851Z] Copying: 552/1024 [MB] (26 MBps)
[2024-11-20T14:37:32.786Z] Copying: 578/1024 [MB] (26 MBps)
[2024-11-20T14:37:33.720Z] Copying: 604/1024 [MB] (25 MBps)
[2024-11-20T14:37:34.655Z] Copying: 631/1024 [MB] (26 MBps)
[2024-11-20T14:37:35.590Z] Copying: 659/1024 [MB] (28 MBps)
[2024-11-20T14:37:36.964Z] Copying: 687/1024 [MB] (27 MBps)
[2024-11-20T14:37:37.902Z] Copying: 713/1024 [MB] (26 MBps)
[2024-11-20T14:37:38.837Z] Copying: 737/1024 [MB] (24 MBps)
[2024-11-20T14:37:39.778Z] Copying: 764/1024 [MB] (26 MBps)
[2024-11-20T14:37:40.754Z] Copying: 793/1024 [MB] (28 MBps)
[2024-11-20T14:37:41.688Z] Copying: 819/1024 [MB] (26 MBps)
[2024-11-20T14:37:42.622Z] Copying: 847/1024 [MB] (27 MBps)
[2024-11-20T14:37:43.993Z] Copying: 872/1024 [MB] (25 MBps)
[2024-11-20T14:37:44.927Z] Copying: 901/1024 [MB] (28 MBps)
[2024-11-20T14:37:45.863Z] Copying: 927/1024 [MB] (26 MBps)
[2024-11-20T14:37:46.797Z] Copying: 954/1024 [MB] (27 MBps)
[2024-11-20T14:37:47.730Z] Copying: 981/1024 [MB] (26 MBps)
[2024-11-20T14:37:48.296Z] Copying: 1008/1024 [MB] (26 MBps)
[2024-11-20T14:37:48.554Z] Copying: 1024/1024 [MB] (average 26 MBps)
00:27:09.572  [2024-11-20 14:37:48.352417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.572  [2024-11-20 14:37:48.352493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:27:09.572  [2024-11-20 14:37:48.352516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:27:09.572  [2024-11-20 14:37:48.352545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
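The Copying progress lines above carry enough data to sanity-check the throughput figures. A minimal sketch (assuming well-formed ISO-8601 timestamps) that computes the rate between two adjacent samples; the log's own per-line MBps values are rounded and may use a slightly different window, so small differences are expected:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%f%z"

def _ts(t: str) -> datetime:
    # datetime.strptime has no 'Z' shorthand, so map it to an explicit offset.
    return datetime.strptime(t.replace("Z", "+0000"), FMT)

def rate_mbps(t0: str, mb0: int, t1: str, mb1: int) -> float:
    """Rate between two 'Copying: N/1024 [MB]' progress samples."""
    return (mb1 - mb0) / (_ts(t1) - _ts(t0)).total_seconds()

print(rate_mbps("2024-11-20T14:37:11.700Z", 25,
                "2024-11-20T14:37:12.636Z", 51))  # ~27.8 MB/s
```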
00:27:09.572  [2024-11-20 14:37:48.352600] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:09.572  [2024-11-20 14:37:48.356639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.572  [2024-11-20 14:37:48.356677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:27:09.572  [2024-11-20 14:37:48.356695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.009 ms
00:27:09.572  [2024-11-20 14:37:48.356709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.572  [2024-11-20 14:37:48.357003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.572  [2024-11-20 14:37:48.357025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:27:09.572  [2024-11-20 14:37:48.357041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.261 ms
00:27:09.572  [2024-11-20 14:37:48.357055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.572  [2024-11-20 14:37:48.362073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.572  [2024-11-20 14:37:48.362120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:27:09.572  [2024-11-20 14:37:48.362139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.988 ms
00:27:09.572  [2024-11-20 14:37:48.362155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.572  [2024-11-20 14:37:48.371438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.572  [2024-11-20 14:37:48.371489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:27:09.572  [2024-11-20 14:37:48.371508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 9.231 ms
00:27:09.572  [2024-11-20 14:37:48.371522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.572  [2024-11-20 14:37:48.410920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.572  [2024-11-20 14:37:48.410993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:27:09.572  [2024-11-20 14:37:48.411016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.264 ms
00:27:09.572  [2024-11-20 14:37:48.411030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.572  [2024-11-20 14:37:48.432577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.572  [2024-11-20 14:37:48.432658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:27:09.572  [2024-11-20 14:37:48.432681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 21.453 ms
00:27:09.572  [2024-11-20 14:37:48.432696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.572  [2024-11-20 14:37:48.515819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.572  [2024-11-20 14:37:48.515898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:27:09.572  [2024-11-20 14:37:48.515921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 83.025 ms
00:27:09.572  [2024-11-20 14:37:48.515939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.832  [2024-11-20 14:37:48.554850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.832  [2024-11-20 14:37:48.554913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:27:09.832  [2024-11-20 14:37:48.554935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.881 ms
00:27:09.832  [2024-11-20 14:37:48.554949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.832  [2024-11-20 14:37:48.593118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.832  [2024-11-20 14:37:48.593182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:27:09.832  [2024-11-20 14:37:48.593224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 38.102 ms
00:27:09.832  [2024-11-20 14:37:48.593239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.832  [2024-11-20 14:37:48.630646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.832  [2024-11-20 14:37:48.630706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:27:09.832  [2024-11-20 14:37:48.630724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 37.340 ms
00:27:09.832  [2024-11-20 14:37:48.630736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.832  [2024-11-20 14:37:48.661881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.832  [2024-11-20 14:37:48.661940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:27:09.832  [2024-11-20 14:37:48.661958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.030 ms
00:27:09.832  [2024-11-20 14:37:48.661970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.832  [2024-11-20 14:37:48.662024] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:09.832  [2024-11-20 14:37:48.662049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   131072 / 261120 	wr_cnt: 1	state: open
00:27:09.832  [2024-11-20 14:37:48.662064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.832  [2024-11-20 14:37:48.662256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.662998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.833  [2024-11-20 14:37:48.663175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:27:09.834  [2024-11-20 14:37:48.663314] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:27:09.834  [2024-11-20 14:37:48.663326] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         cd5e6f69-30c0-44af-9535-aa51982d8157
00:27:09.834  [2024-11-20 14:37:48.663338] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    131072
00:27:09.834  [2024-11-20 14:37:48.663349] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        9152
00:27:09.834  [2024-11-20 14:37:48.663360] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         8192
00:27:09.834  [2024-11-20 14:37:48.663372] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.1172
00:27:09.834  [2024-11-20 14:37:48.663383] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:09.834  [2024-11-20 14:37:48.663404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:27:09.834  [2024-11-20 14:37:48.663425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:27:09.834  [2024-11-20 14:37:48.663462] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:27:09.834  [2024-11-20 14:37:48.663474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:27:09.834  [2024-11-20 14:37:48.663486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.834  [2024-11-20 14:37:48.663497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:27:09.834  [2024-11-20 14:37:48.663510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.463 ms
00:27:09.834  [2024-11-20 14:37:48.663521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
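The WAF reported in the statistics dump is total writes divided by user writes, and the arithmetic checks out:

```python
total_writes = 9152  # from ftl_dev_dump_stats above
user_writes = 8192
print(f"WAF: {total_writes / user_writes:.4f}")  # WAF: 1.1172
```

The extra 960 writes are presumably the FTL's own metadata traffic, though the log does not break them down.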
00:27:09.834  [2024-11-20 14:37:48.680712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.834  [2024-11-20 14:37:48.680776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:27:09.834  [2024-11-20 14:37:48.680797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.094 ms
00:27:09.834  [2024-11-20 14:37:48.680823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.834  [2024-11-20 14:37:48.681294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:09.834  [2024-11-20 14:37:48.681315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:27:09.834  [2024-11-20 14:37:48.681329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.422 ms
00:27:09.834  [2024-11-20 14:37:48.681341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.834  [2024-11-20 14:37:48.725584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:09.834  [2024-11-20 14:37:48.725658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:27:09.834  [2024-11-20 14:37:48.725675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:09.834  [2024-11-20 14:37:48.725687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.834  [2024-11-20 14:37:48.725769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:09.834  [2024-11-20 14:37:48.725786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:27:09.834  [2024-11-20 14:37:48.725798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:09.834  [2024-11-20 14:37:48.725809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.834  [2024-11-20 14:37:48.725906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:09.834  [2024-11-20 14:37:48.725925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:27:09.834  [2024-11-20 14:37:48.725945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:09.834  [2024-11-20 14:37:48.725956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:09.834  [2024-11-20 14:37:48.725978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:09.834  [2024-11-20 14:37:48.725992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:27:09.834  [2024-11-20 14:37:48.726003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:09.834  [2024-11-20 14:37:48.726014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.093  [2024-11-20 14:37:48.834100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.093  [2024-11-20 14:37:48.834171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:27:10.093  [2024-11-20 14:37:48.834203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.093  [2024-11-20 14:37:48.834215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.093  [2024-11-20 14:37:48.921005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.093  [2024-11-20 14:37:48.921071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:27:10.093  [2024-11-20 14:37:48.921091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.093  [2024-11-20 14:37:48.921103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.093  [2024-11-20 14:37:48.921210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.093  [2024-11-20 14:37:48.921227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:27:10.093  [2024-11-20 14:37:48.921240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.093  [2024-11-20 14:37:48.921262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.093  [2024-11-20 14:37:48.921310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.093  [2024-11-20 14:37:48.921325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:27:10.093  [2024-11-20 14:37:48.921336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.093  [2024-11-20 14:37:48.921347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.093  [2024-11-20 14:37:48.921474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.093  [2024-11-20 14:37:48.921493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:27:10.093  [2024-11-20 14:37:48.921505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.093  [2024-11-20 14:37:48.921516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.093  [2024-11-20 14:37:48.921592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.093  [2024-11-20 14:37:48.921612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:27:10.093  [2024-11-20 14:37:48.921624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.093  [2024-11-20 14:37:48.921636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.093  [2024-11-20 14:37:48.921682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.093  [2024-11-20 14:37:48.921696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:27:10.093  [2024-11-20 14:37:48.921708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.093  [2024-11-20 14:37:48.921720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.093  [2024-11-20 14:37:48.921778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:10.094  [2024-11-20 14:37:48.921794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:27:10.094  [2024-11-20 14:37:48.921806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:27:10.094  [2024-11-20 14:37:48.921817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:10.094  [2024-11-20 14:37:48.921958] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 569.511 ms, result 0
00:27:11.041  
00:27:11.041  
00:27:11.041   14:37:49 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:13.571  /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:27:13.571  Process with pid 79346 is not found
00:27:13.571  Remove shared memory files
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79346
00:27:13.571   14:37:52 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79346 ']'
00:27:13.571   14:37:52 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79346
00:27:13.571  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79346) - No such process
00:27:13.571   14:37:52 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79346 is not found'
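killprocess above probes pid 79346 with `kill -0`, which sends no signal and only tests whether the process still exists (here it does not, hence the "No such process" line). A Python analog of the same idiom, as an illustrative helper rather than SPDK code:

```python
import os

def pid_exists(pid: int) -> bool:
    """Analog of the shell's `kill -0 $pid` liveness probe."""
    try:
        os.kill(pid, 0)  # signal 0: existence/permission check, nothing sent
    except ProcessLookupError:  # ESRCH, as seen for pid 79346 here
        return False
    except PermissionError:     # EPERM: process exists but isn't ours
        return True
    return True
```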
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:27:13.571   14:37:52 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:27:13.571  
00:27:13.571  real	3m5.783s
00:27:13.571  user	2m50.906s
00:27:13.571  sys	0m17.912s
00:27:13.571   14:37:52 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:13.571   14:37:52 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:27:13.571  ************************************
00:27:13.571  END TEST ftl_restore
00:27:13.571  ************************************
00:27:13.571   14:37:52 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:27:13.571   14:37:52 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:27:13.571   14:37:52 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:13.571   14:37:52 ftl -- common/autotest_common.sh@10 -- # set +x
00:27:13.571  ************************************
00:27:13.571  START TEST ftl_dirty_shutdown
00:27:13.571  ************************************
00:27:13.571   14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:27:13.571  * Looking for test storage...
00:27:13.571  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:27:13.571    14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:13.571     14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:27:13.571     14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:13.571    14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:13.571    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:13.571    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:13.571    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:13.571    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:27:13.571    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:27:13.571    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:27:13.571    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:27:13.571    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:13.572     14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1
00:27:13.572     14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1
00:27:13.572     14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:13.572     14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:27:13.572     14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2
00:27:13.572     14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2
00:27:13.572     14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:13.572     14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0
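The xtrace above walks scripts/common.sh's version check: `lt 1.15 2` splits both versions on '.', '-', and ':' (IFS=.-:), then compares numeric fields left to right, with missing fields treated as zero. A minimal Python sketch of the same comparison, assuming numeric fields only as in this trace:

```python
import re

def version_lt(v1: str, v2: str) -> bool:
    """Field-wise numeric version compare, as traced for `lt 1.15 2`."""
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    width = max(len(a), len(b))
    a += [0] * (width - len(a))  # missing fields default to 0,
    b += [0] * (width - len(b))  # mirroring the shell's empty-field case
    return a < b  # Python list comparison is already element-by-element

print(version_lt("1.15", "2"))  # True: lcov 1.15 predates 2, check passes
```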
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:27:13.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:13.572  		--rc genhtml_branch_coverage=1
00:27:13.572  		--rc genhtml_function_coverage=1
00:27:13.572  		--rc genhtml_legend=1
00:27:13.572  		--rc geninfo_all_blocks=1
00:27:13.572  		--rc geninfo_unexecuted_blocks=1
00:27:13.572  		
00:27:13.572  		'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:27:13.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:13.572  		--rc genhtml_branch_coverage=1
00:27:13.572  		--rc genhtml_function_coverage=1
00:27:13.572  		--rc genhtml_legend=1
00:27:13.572  		--rc geninfo_all_blocks=1
00:27:13.572  		--rc geninfo_unexecuted_blocks=1
00:27:13.572  		
00:27:13.572  		'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:27:13.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:13.572  		--rc genhtml_branch_coverage=1
00:27:13.572  		--rc genhtml_function_coverage=1
00:27:13.572  		--rc genhtml_legend=1
00:27:13.572  		--rc geninfo_all_blocks=1
00:27:13.572  		--rc geninfo_unexecuted_blocks=1
00:27:13.572  		
00:27:13.572  		'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:27:13.572  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:13.572  		--rc genhtml_branch_coverage=1
00:27:13.572  		--rc genhtml_function_coverage=1
00:27:13.572  		--rc genhtml_legend=1
00:27:13.572  		--rc geninfo_all_blocks=1
00:27:13.572  		--rc geninfo_unexecuted_blocks=1
00:27:13.572  		
00:27:13.572  		'
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:27:13.572      14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh
00:27:13.572     14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:27:13.572     14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:13.572    14:37:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144
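The option handling traced above (dirty_shutdown.sh@14-28) is a standard getopts loop. A minimal sketch of what the script is doing, with variable names taken from the trace; the exact case bodies and the positional handling are assumptions:

    while getopts ':u:c:' opt; do
        case $opt in
            u) uuid=$OPTARG ;;      # FTL UUID to restore (not used in this run)
            c) nv_cache=$OPTARG ;;  # PCI BDF of the NV cache device
        esac
    done
    shift $((OPTIND - 1))           # traces as 'shift 2' after '-c 0000:00:10.0'
    device=$1                       # base device BDF, 0000:00:11.0 here
    timeout=${2:-240}               # per-RPC timeout in seconds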
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81281
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81281
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81281 ']'
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:13.572  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:13.572   14:37:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:27:13.832  [2024-11-20 14:37:52.650441] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:27:13.832  [2024-11-20 14:37:52.650665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81281 ]
00:27:14.099  [2024-11-20 14:37:52.851560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:14.099  [2024-11-20 14:37:52.959994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:15.034   14:37:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:15.034   14:37:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0
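waitforlisten (autotest_common.sh@835-868 above) blocks until the freshly launched spdk_tgt answers on its RPC socket. A condensed sketch of the pattern, assuming a poll via rpc_get_methods; the real helper also handles TCP addresses and the xtrace toggling seen in the trace:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 1; i <= max_retries; i++)); do
            # rpc.py exits non-zero until the target is listening
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            kill -0 "$pid" 2> /dev/null || return 1  # target died; give up
            sleep 0.5
        done
        return 1
    }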
00:27:15.034    14:37:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:27:15.034    14:37:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0
00:27:15.034    14:37:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:27:15.034    14:37:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424
00:27:15.034    14:37:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev
00:27:15.034     14:37:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:27:15.293    14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:27:15.293    14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size
00:27:15.293     14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:27:15.293     14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:27:15.293     14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:27:15.293     14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:27:15.293     14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:27:15.293      14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:27:15.552     14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:27:15.552    {
00:27:15.552      "name": "nvme0n1",
00:27:15.552      "aliases": [
00:27:15.552        "fc8cc216-f22e-4898-98e5-4a99e35531c0"
00:27:15.552      ],
00:27:15.552      "product_name": "NVMe disk",
00:27:15.552      "block_size": 4096,
00:27:15.552      "num_blocks": 1310720,
00:27:15.552      "uuid": "fc8cc216-f22e-4898-98e5-4a99e35531c0",
00:27:15.552      "numa_id": -1,
00:27:15.552      "assigned_rate_limits": {
00:27:15.552        "rw_ios_per_sec": 0,
00:27:15.552        "rw_mbytes_per_sec": 0,
00:27:15.552        "r_mbytes_per_sec": 0,
00:27:15.552        "w_mbytes_per_sec": 0
00:27:15.552      },
00:27:15.552      "claimed": true,
00:27:15.552      "claim_type": "read_many_write_one",
00:27:15.552      "zoned": false,
00:27:15.552      "supported_io_types": {
00:27:15.552        "read": true,
00:27:15.552        "write": true,
00:27:15.552        "unmap": true,
00:27:15.552        "flush": true,
00:27:15.552        "reset": true,
00:27:15.552        "nvme_admin": true,
00:27:15.552        "nvme_io": true,
00:27:15.552        "nvme_io_md": false,
00:27:15.552        "write_zeroes": true,
00:27:15.552        "zcopy": false,
00:27:15.552        "get_zone_info": false,
00:27:15.552        "zone_management": false,
00:27:15.552        "zone_append": false,
00:27:15.552        "compare": true,
00:27:15.552        "compare_and_write": false,
00:27:15.552        "abort": true,
00:27:15.552        "seek_hole": false,
00:27:15.552        "seek_data": false,
00:27:15.552        "copy": true,
00:27:15.552        "nvme_iov_md": false
00:27:15.552      },
00:27:15.552      "driver_specific": {
00:27:15.552        "nvme": [
00:27:15.552          {
00:27:15.552            "pci_address": "0000:00:11.0",
00:27:15.552            "trid": {
00:27:15.552              "trtype": "PCIe",
00:27:15.552              "traddr": "0000:00:11.0"
00:27:15.552            },
00:27:15.552            "ctrlr_data": {
00:27:15.552              "cntlid": 0,
00:27:15.552              "vendor_id": "0x1b36",
00:27:15.552              "model_number": "QEMU NVMe Ctrl",
00:27:15.552              "serial_number": "12341",
00:27:15.552              "firmware_revision": "8.0.0",
00:27:15.552              "subnqn": "nqn.2019-08.org.qemu:12341",
00:27:15.552              "oacs": {
00:27:15.552                "security": 0,
00:27:15.552                "format": 1,
00:27:15.552                "firmware": 0,
00:27:15.552                "ns_manage": 1
00:27:15.552              },
00:27:15.552              "multi_ctrlr": false,
00:27:15.552              "ana_reporting": false
00:27:15.552            },
00:27:15.552            "vs": {
00:27:15.552              "nvme_version": "1.4"
00:27:15.552            },
00:27:15.552            "ns_data": {
00:27:15.552              "id": 1,
00:27:15.552              "can_share": false
00:27:15.552            }
00:27:15.552          }
00:27:15.552        ],
00:27:15.552        "mp_policy": "active_passive"
00:27:15.552      }
00:27:15.552    }
00:27:15.552  ]'
00:27:15.552      14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:27:15.552     14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:27:15.552      14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:27:15.552     14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720
00:27:15.552     14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:27:15.552     14:37:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120
00:27:15.552    14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120
00:27:15.552    14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
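get_bdev_size is plain block arithmetic: size in MiB = num_blocks x block_size / 1024^2. With the values from the dump above, 1310720 blocks of 4096 B come out to the 5120 MiB echoed by the helper, so the guard [[ 103424 -le 5120 ]] at common.sh@64 evaluates false:

    echo $(( 1310720 * 4096 / 1024 / 1024 ))   # -> 5120 (MiB)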
00:27:15.552    14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols
00:27:15.552     14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:27:15.552     14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:27:15.811    14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=2edefd5e-867e-4362-bcac-047aadb98384
00:27:15.811    14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
00:27:15.811    14:37:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2edefd5e-867e-4362-bcac-047aadb98384
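clear_lvols, as traced, enumerates any leftover lvstores and deletes them so the test starts from a clean namespace; the equivalent standalone commands:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    stores=$($rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    for lvs in $stores; do
        $rpc_py bdev_lvol_delete_lvstore -u "$lvs"
    done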
00:27:16.378     14:37:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:27:16.636    14:37:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=d27189a1-926c-49cd-b173-4d37645cd761
00:27:16.636    14:37:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d27189a1-926c-49cd-b173-4d37645cd761
00:27:16.894   14:37:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=26c14718-6fd8-4f7d-abbe-120e27f53182
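Note the size mismatch this step gets away with: a 103424 MiB lvol on a 5120 MiB namespace. It works because of the -t flag, which makes the volume thin-provisioned (confirmed by "thin_provision": true and "num_allocated_clusters": 0 in the dumps that follow), so clusters are allocated on write rather than up front:

    # -t defers cluster allocation: 103424 MiB of address space on a 5120 MiB device
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u d27189a1-926c-49cd-b173-4d37645cd761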
00:27:16.894   14:37:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']'
00:27:16.894    14:37:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:16.894    14:37:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0
00:27:16.894    14:37:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:27:16.894    14:37:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:16.894    14:37:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size=
00:27:16.894     14:37:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:16.894     14:37:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:16.894     14:37:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:27:16.894     14:37:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:27:16.894     14:37:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:27:16.894      14:37:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:17.153     14:37:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:27:17.153    {
00:27:17.153      "name": "26c14718-6fd8-4f7d-abbe-120e27f53182",
00:27:17.153      "aliases": [
00:27:17.153        "lvs/nvme0n1p0"
00:27:17.153      ],
00:27:17.153      "product_name": "Logical Volume",
00:27:17.153      "block_size": 4096,
00:27:17.153      "num_blocks": 26476544,
00:27:17.153      "uuid": "26c14718-6fd8-4f7d-abbe-120e27f53182",
00:27:17.153      "assigned_rate_limits": {
00:27:17.153        "rw_ios_per_sec": 0,
00:27:17.153        "rw_mbytes_per_sec": 0,
00:27:17.153        "r_mbytes_per_sec": 0,
00:27:17.153        "w_mbytes_per_sec": 0
00:27:17.153      },
00:27:17.153      "claimed": false,
00:27:17.153      "zoned": false,
00:27:17.153      "supported_io_types": {
00:27:17.153        "read": true,
00:27:17.153        "write": true,
00:27:17.153        "unmap": true,
00:27:17.153        "flush": false,
00:27:17.153        "reset": true,
00:27:17.153        "nvme_admin": false,
00:27:17.153        "nvme_io": false,
00:27:17.153        "nvme_io_md": false,
00:27:17.153        "write_zeroes": true,
00:27:17.153        "zcopy": false,
00:27:17.153        "get_zone_info": false,
00:27:17.153        "zone_management": false,
00:27:17.153        "zone_append": false,
00:27:17.153        "compare": false,
00:27:17.153        "compare_and_write": false,
00:27:17.153        "abort": false,
00:27:17.153        "seek_hole": true,
00:27:17.153        "seek_data": true,
00:27:17.153        "copy": false,
00:27:17.153        "nvme_iov_md": false
00:27:17.153      },
00:27:17.153      "driver_specific": {
00:27:17.153        "lvol": {
00:27:17.153          "lvol_store_uuid": "d27189a1-926c-49cd-b173-4d37645cd761",
00:27:17.153          "base_bdev": "nvme0n1",
00:27:17.153          "thin_provision": true,
00:27:17.153          "num_allocated_clusters": 0,
00:27:17.153          "snapshot": false,
00:27:17.153          "clone": false,
00:27:17.153          "esnap_clone": false
00:27:17.153        }
00:27:17.153      }
00:27:17.153    }
00:27:17.153  ]'
00:27:17.153      14:37:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:27:17.153     14:37:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:27:17.153      14:37:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:27:17.153     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:27:17.153     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:27:17.153     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:27:17.153    14:37:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171
00:27:17.153    14:37:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:27:17.153     14:37:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:27:17.720    14:37:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:27:17.720    14:37:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]]
00:27:17.720     14:37:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:17.721     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:17.721     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:27:17.721     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:27:17.721     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:27:17.721      14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:17.979     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:27:17.979    {
00:27:17.979      "name": "26c14718-6fd8-4f7d-abbe-120e27f53182",
00:27:17.979      "aliases": [
00:27:17.979        "lvs/nvme0n1p0"
00:27:17.979      ],
00:27:17.979      "product_name": "Logical Volume",
00:27:17.979      "block_size": 4096,
00:27:17.979      "num_blocks": 26476544,
00:27:17.979      "uuid": "26c14718-6fd8-4f7d-abbe-120e27f53182",
00:27:17.979      "assigned_rate_limits": {
00:27:17.979        "rw_ios_per_sec": 0,
00:27:17.979        "rw_mbytes_per_sec": 0,
00:27:17.979        "r_mbytes_per_sec": 0,
00:27:17.979        "w_mbytes_per_sec": 0
00:27:17.979      },
00:27:17.979      "claimed": false,
00:27:17.979      "zoned": false,
00:27:17.979      "supported_io_types": {
00:27:17.979        "read": true,
00:27:17.979        "write": true,
00:27:17.979        "unmap": true,
00:27:17.979        "flush": false,
00:27:17.979        "reset": true,
00:27:17.979        "nvme_admin": false,
00:27:17.979        "nvme_io": false,
00:27:17.979        "nvme_io_md": false,
00:27:17.979        "write_zeroes": true,
00:27:17.979        "zcopy": false,
00:27:17.979        "get_zone_info": false,
00:27:17.979        "zone_management": false,
00:27:17.979        "zone_append": false,
00:27:17.979        "compare": false,
00:27:17.979        "compare_and_write": false,
00:27:17.979        "abort": false,
00:27:17.979        "seek_hole": true,
00:27:17.979        "seek_data": true,
00:27:17.979        "copy": false,
00:27:17.979        "nvme_iov_md": false
00:27:17.979      },
00:27:17.979      "driver_specific": {
00:27:17.979        "lvol": {
00:27:17.979          "lvol_store_uuid": "d27189a1-926c-49cd-b173-4d37645cd761",
00:27:17.979          "base_bdev": "nvme0n1",
00:27:17.979          "thin_provision": true,
00:27:17.979          "num_allocated_clusters": 0,
00:27:17.979          "snapshot": false,
00:27:17.979          "clone": false,
00:27:17.979          "esnap_clone": false
00:27:17.979        }
00:27:17.979      }
00:27:17.979    }
00:27:17.979  ]'
00:27:17.979      14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:27:17.979     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:27:17.979      14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:27:17.979     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:27:17.979     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:27:17.979     14:37:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:27:17.979    14:37:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171
00:27:17.979    14:37:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:27:18.237   14:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0
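The cache sizing is consistent with the base volume: 5171 matches 103424 / 20 in integer arithmetic (presumably how common.sh@41 derives base_size, which is then used verbatim as cache_size), and bdev_split_create carves exactly one split of that size off the cache namespace, yielding nvc0n1p0:

    echo $(( 103424 / 20 ))                      # -> 5171 (MiB), the traced cache_size
    $rpc_py bdev_split_create nvc0n1 -s 5171 1   # one 5171 MiB split -> nvc0n1p0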
00:27:18.237    14:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:18.237    14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:18.237    14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:27:18.237    14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:27:18.237    14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:27:18.237     14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 26c14718-6fd8-4f7d-abbe-120e27f53182
00:27:18.496    14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:27:18.496    {
00:27:18.496      "name": "26c14718-6fd8-4f7d-abbe-120e27f53182",
00:27:18.496      "aliases": [
00:27:18.496        "lvs/nvme0n1p0"
00:27:18.496      ],
00:27:18.496      "product_name": "Logical Volume",
00:27:18.496      "block_size": 4096,
00:27:18.496      "num_blocks": 26476544,
00:27:18.496      "uuid": "26c14718-6fd8-4f7d-abbe-120e27f53182",
00:27:18.496      "assigned_rate_limits": {
00:27:18.496        "rw_ios_per_sec": 0,
00:27:18.496        "rw_mbytes_per_sec": 0,
00:27:18.496        "r_mbytes_per_sec": 0,
00:27:18.496        "w_mbytes_per_sec": 0
00:27:18.496      },
00:27:18.496      "claimed": false,
00:27:18.496      "zoned": false,
00:27:18.496      "supported_io_types": {
00:27:18.496        "read": true,
00:27:18.496        "write": true,
00:27:18.496        "unmap": true,
00:27:18.496        "flush": false,
00:27:18.496        "reset": true,
00:27:18.496        "nvme_admin": false,
00:27:18.496        "nvme_io": false,
00:27:18.496        "nvme_io_md": false,
00:27:18.496        "write_zeroes": true,
00:27:18.496        "zcopy": false,
00:27:18.496        "get_zone_info": false,
00:27:18.496        "zone_management": false,
00:27:18.496        "zone_append": false,
00:27:18.496        "compare": false,
00:27:18.496        "compare_and_write": false,
00:27:18.496        "abort": false,
00:27:18.496        "seek_hole": true,
00:27:18.496        "seek_data": true,
00:27:18.496        "copy": false,
00:27:18.496        "nvme_iov_md": false
00:27:18.496      },
00:27:18.496      "driver_specific": {
00:27:18.496        "lvol": {
00:27:18.496          "lvol_store_uuid": "d27189a1-926c-49cd-b173-4d37645cd761",
00:27:18.496          "base_bdev": "nvme0n1",
00:27:18.496          "thin_provision": true,
00:27:18.496          "num_allocated_clusters": 0,
00:27:18.496          "snapshot": false,
00:27:18.496          "clone": false,
00:27:18.496          "esnap_clone": false
00:27:18.496        }
00:27:18.496      }
00:27:18.496    }
00:27:18.496  ]'
00:27:18.496     14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:27:18.496    14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:27:18.496     14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:27:18.754    14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544
00:27:18.754    14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:27:18.754    14:37:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424
00:27:18.754   14:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10
00:27:18.754   14:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 26c14718-6fd8-4f7d-abbe-120e27f53182 --l2p_dram_limit 10'
00:27:18.754   14:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']'
00:27:18.754   14:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']'
00:27:18.754   14:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0'
00:27:18.754   14:37:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 26c14718-6fd8-4f7d-abbe-120e27f53182 --l2p_dram_limit 10 -c nvc0n1p0
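The argument string assembled at dirty_shutdown.sh@56-61 above follows a conditional pattern: the cache flag is appended only when a -c NV-cache device was given. As standalone shell, with names from the trace, the command that produces the FTL startup sequence below:

    ftl_construct_args="bdev_ftl_create -b ftl0 -d $split_bdev --l2p_dram_limit 10"
    [[ -n $nv_cache ]] && ftl_construct_args+=" -c $nvc_bdev"
    $rpc_py -t 240 $ftl_construct_args   # -t 240: the traced RPC timeout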
00:27:19.014  [2024-11-20 14:37:57.779800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.779871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:27:19.014  [2024-11-20 14:37:57.779900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:27:19.014  [2024-11-20 14:37:57.779913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.780004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.780026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:27:19.014  [2024-11-20 14:37:57.780042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.060 ms
00:27:19.014  [2024-11-20 14:37:57.780054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.780086] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:27:19.014  [2024-11-20 14:37:57.781130] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:27:19.014  [2024-11-20 14:37:57.781171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.781186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:27:19.014  [2024-11-20 14:37:57.781201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.090 ms
00:27:19.014  [2024-11-20 14:37:57.781214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.781358] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 016b7f42-3b0a-4c4c-8075-b33620d527e3
00:27:19.014  [2024-11-20 14:37:57.782468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.782516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Default-initialize superblock
00:27:19.014  [2024-11-20 14:37:57.782534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.023 ms
00:27:19.014  [2024-11-20 14:37:57.782560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.787592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.787655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:27:19.014  [2024-11-20 14:37:57.787673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.935 ms
00:27:19.014  [2024-11-20 14:37:57.787688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.787823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.787847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:27:19.014  [2024-11-20 14:37:57.787861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.091 ms
00:27:19.014  [2024-11-20 14:37:57.787879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.787968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.787992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:27:19.014  [2024-11-20 14:37:57.788009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.014 ms
00:27:19.014  [2024-11-20 14:37:57.788023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.788056] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:27:19.014  [2024-11-20 14:37:57.792773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.792817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:27:19.014  [2024-11-20 14:37:57.792838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.721 ms
00:27:19.014  [2024-11-20 14:37:57.792850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.792900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.792916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:27:19.014  [2024-11-20 14:37:57.792932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:27:19.014  [2024-11-20 14:37:57.792944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.793010] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:27:19.014  [2024-11-20 14:37:57.793181] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:27:19.014  [2024-11-20 14:37:57.793207] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:27:19.014  [2024-11-20 14:37:57.793224] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:27:19.014  [2024-11-20 14:37:57.793241] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:27:19.014  [2024-11-20 14:37:57.793256] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:27:19.014  [2024-11-20 14:37:57.793271] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:27:19.014  [2024-11-20 14:37:57.793294] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:27:19.014  [2024-11-20 14:37:57.793307] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:27:19.014  [2024-11-20 14:37:57.793319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:27:19.014  [2024-11-20 14:37:57.793333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.793346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:27:19.014  [2024-11-20 14:37:57.793360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.327 ms
00:27:19.014  [2024-11-20 14:37:57.793389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.793490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.014  [2024-11-20 14:37:57.793507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:27:19.014  [2024-11-20 14:37:57.793522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.071 ms
00:27:19.014  [2024-11-20 14:37:57.793534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:19.014  [2024-11-20 14:37:57.793687] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:27:19.015  [2024-11-20 14:37:57.793710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:27:19.015  [2024-11-20 14:37:57.793726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:27:19.015  [2024-11-20 14:37:57.793739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.015  [2024-11-20 14:37:57.793753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:27:19.015  [2024-11-20 14:37:57.793765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:27:19.015  [2024-11-20 14:37:57.793778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:27:19.015  [2024-11-20 14:37:57.793789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:27:19.015  [2024-11-20 14:37:57.793802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:27:19.015  [2024-11-20 14:37:57.793813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:27:19.015  [2024-11-20 14:37:57.793836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:27:19.015  [2024-11-20 14:37:57.793847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:27:19.015  [2024-11-20 14:37:57.793859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:27:19.015  [2024-11-20 14:37:57.793870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:27:19.015  [2024-11-20 14:37:57.793883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:27:19.015  [2024-11-20 14:37:57.793894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.015  [2024-11-20 14:37:57.793912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:27:19.015  [2024-11-20 14:37:57.793924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:27:19.015  [2024-11-20 14:37:57.793937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.015  [2024-11-20 14:37:57.793948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:27:19.015  [2024-11-20 14:37:57.793961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:27:19.015  [2024-11-20 14:37:57.793972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:19.015  [2024-11-20 14:37:57.793985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:27:19.015  [2024-11-20 14:37:57.793996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:27:19.015  [2024-11-20 14:37:57.794009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:19.015  [2024-11-20 14:37:57.794020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:27:19.015  [2024-11-20 14:37:57.794033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:27:19.015  [2024-11-20 14:37:57.794044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:19.015  [2024-11-20 14:37:57.794057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:27:19.015  [2024-11-20 14:37:57.794068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:27:19.015  [2024-11-20 14:37:57.794081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:27:19.015  [2024-11-20 14:37:57.794091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:27:19.015  [2024-11-20 14:37:57.794106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:27:19.015  [2024-11-20 14:37:57.794117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:27:19.015  [2024-11-20 14:37:57.794130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:27:19.015  [2024-11-20 14:37:57.794141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:27:19.015  [2024-11-20 14:37:57.794154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:27:19.015  [2024-11-20 14:37:57.794165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:27:19.015  [2024-11-20 14:37:57.794179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:27:19.015  [2024-11-20 14:37:57.794190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.015  [2024-11-20 14:37:57.794203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:27:19.015  [2024-11-20 14:37:57.794215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:27:19.015  [2024-11-20 14:37:57.794229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.015  [2024-11-20 14:37:57.794240] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:27:19.015  [2024-11-20 14:37:57.794256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:27:19.015  [2024-11-20 14:37:57.794269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:27:19.015  [2024-11-20 14:37:57.794282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:27:19.015  [2024-11-20 14:37:57.794295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:27:19.015  [2024-11-20 14:37:57.794310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:27:19.015  [2024-11-20 14:37:57.794321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:27:19.015  [2024-11-20 14:37:57.794335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:27:19.015  [2024-11-20 14:37:57.794346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:27:19.015  [2024-11-20 14:37:57.794359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:27:19.015  [2024-11-20 14:37:57.794375] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:27:19.015  [2024-11-20 14:37:57.794395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:19.015  [2024-11-20 14:37:57.794409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:27:19.015  [2024-11-20 14:37:57.794422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:27:19.015  [2024-11-20 14:37:57.794434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:27:19.015  [2024-11-20 14:37:57.794448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:27:19.015  [2024-11-20 14:37:57.794460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:27:19.015  [2024-11-20 14:37:57.794474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:27:19.015  [2024-11-20 14:37:57.794486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:27:19.015  [2024-11-20 14:37:57.794500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:27:19.015  [2024-11-20 14:37:57.794511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:27:19.015  [2024-11-20 14:37:57.794527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:27:19.015  [2024-11-20 14:37:57.794540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:27:19.015  [2024-11-20 14:37:57.794555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:27:19.015  [2024-11-20 14:37:57.794580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:27:19.015  [2024-11-20 14:37:57.794600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:27:19.015  [2024-11-20 14:37:57.794613] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:27:19.015  [2024-11-20 14:37:57.794628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:27:19.015  [2024-11-20 14:37:57.794641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:27:19.015  [2024-11-20 14:37:57.794655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:27:19.015  [2024-11-20 14:37:57.794667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:27:19.015  [2024-11-20 14:37:57.794682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:27:19.015  [2024-11-20 14:37:57.794695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:19.015  [2024-11-20 14:37:57.794709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:27:19.015  [2024-11-20 14:37:57.794722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.093 ms
00:27:19.015  [2024-11-20 14:37:57.794735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
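The layout numbers above are internally consistent: 20971520 L2P entries at 4 B per entry is exactly the 80.00 MiB traced for Region l2p, and the --l2p_dram_limit 10 passed to bdev_ftl_create is why only about 10 MiB of that table is later held resident (see the "l2p maximum resident size is: 9 (of 10) MiB" notice further down):

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80 (MiB), the Region l2p size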
00:27:19.015  [2024-11-20 14:37:57.794790] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:27:19.015  [2024-11-20 14:37:57.794812] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:27:20.940  [2024-11-20 14:37:59.796432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:20.940  [2024-11-20 14:37:59.796760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Scrub NV cache
00:27:20.940  [2024-11-20 14:37:59.796796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2001.652 ms
00:27:20.940  [2024-11-20 14:37:59.796814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:20.940  [2024-11-20 14:37:59.830728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:20.940  [2024-11-20 14:37:59.830806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:27:20.940  [2024-11-20 14:37:59.830829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.629 ms
00:27:20.940  [2024-11-20 14:37:59.830844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:20.940  [2024-11-20 14:37:59.831047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:20.940  [2024-11-20 14:37:59.831073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:27:20.940  [2024-11-20 14:37:59.831092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.069 ms
00:27:20.940  [2024-11-20 14:37:59.831108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:20.940  [2024-11-20 14:37:59.872577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:20.940  [2024-11-20 14:37:59.872660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:27:20.940  [2024-11-20 14:37:59.872682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 41.398 ms
00:27:20.940  [2024-11-20 14:37:59.872697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:20.940  [2024-11-20 14:37:59.872770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:20.940  [2024-11-20 14:37:59.872790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:27:20.940  [2024-11-20 14:37:59.872804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:27:20.940  [2024-11-20 14:37:59.872818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:20.940  [2024-11-20 14:37:59.873262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:20.940  [2024-11-20 14:37:59.873290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:27:20.940  [2024-11-20 14:37:59.873305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.320 ms
00:27:20.940  [2024-11-20 14:37:59.873319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:20.940  [2024-11-20 14:37:59.873459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:20.940  [2024-11-20 14:37:59.873480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:27:20.940  [2024-11-20 14:37:59.873494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.113 ms
00:27:20.940  [2024-11-20 14:37:59.873510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:20.940  [2024-11-20 14:37:59.892093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:20.940  [2024-11-20 14:37:59.892422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:27:20.940  [2024-11-20 14:37:59.892456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.550 ms
00:27:20.940  [2024-11-20 14:37:59.892474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.199  [2024-11-20 14:37:59.928652] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:27:21.199  [2024-11-20 14:37:59.931915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.199  [2024-11-20 14:37:59.931983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:27:21.199  [2024-11-20 14:37:59.932014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 39.234 ms
00:27:21.199  [2024-11-20 14:37:59.932028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.199  [2024-11-20 14:37:59.990739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.199  [2024-11-20 14:37:59.990827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear L2P
00:27:21.199  [2024-11-20 14:37:59.990853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 58.631 ms
00:27:21.199  [2024-11-20 14:37:59.990866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.199  [2024-11-20 14:37:59.991121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.199  [2024-11-20 14:37:59.991143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:27:21.199  [2024-11-20 14:37:59.991163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.171 ms
00:27:21.199  [2024-11-20 14:37:59.991175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.199  [2024-11-20 14:38:00.023390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.199  [2024-11-20 14:38:00.023473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial band info metadata
00:27:21.199  [2024-11-20 14:38:00.023499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.111 ms
00:27:21.199  [2024-11-20 14:38:00.023513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.199  [2024-11-20 14:38:00.055954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.199  [2024-11-20 14:38:00.056029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Save initial chunk info metadata
00:27:21.199  [2024-11-20 14:38:00.056055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.331 ms
00:27:21.199  [2024-11-20 14:38:00.056068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.199  [2024-11-20 14:38:00.056856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.199  [2024-11-20 14:38:00.056892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:27:21.199  [2024-11-20 14:38:00.056915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.711 ms
00:27:21.199  [2024-11-20 14:38:00.056927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.199  [2024-11-20 14:38:00.139946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.199  [2024-11-20 14:38:00.140028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Wipe P2L region
00:27:21.199  [2024-11-20 14:38:00.140057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 82.914 ms
00:27:21.199  [2024-11-20 14:38:00.140071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.199  [2024-11-20 14:38:00.172948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.199  [2024-11-20 14:38:00.173023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim map
00:27:21.199  [2024-11-20 14:38:00.173048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.708 ms
00:27:21.199  [2024-11-20 14:38:00.173061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.458  [2024-11-20 14:38:00.205283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.458  [2024-11-20 14:38:00.205356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Clear trim log
00:27:21.458  [2024-11-20 14:38:00.205380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.141 ms
00:27:21.458  [2024-11-20 14:38:00.205393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.458  [2024-11-20 14:38:00.237365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.458  [2024-11-20 14:38:00.237648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:27:21.458  [2024-11-20 14:38:00.237687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.884 ms
00:27:21.458  [2024-11-20 14:38:00.237702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.458  [2024-11-20 14:38:00.237786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.458  [2024-11-20 14:38:00.237805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:27:21.458  [2024-11-20 14:38:00.237826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.009 ms
00:27:21.458  [2024-11-20 14:38:00.237838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.458  [2024-11-20 14:38:00.237987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:21.458  [2024-11-20 14:38:00.238011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:27:21.458  [2024-11-20 14:38:00.238027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.045 ms
00:27:21.458  [2024-11-20 14:38:00.238039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:27:21.458  [2024-11-20 14:38:00.239097] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2458.789 ms, result 0
00:27:21.458  {
00:27:21.458    "name": "ftl0",
00:27:21.458    "uuid": "016b7f42-3b0a-4c4c-8075-b33620d527e3"
00:27:21.458  }
00:27:21.458   14:38:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:27:21.458   14:38:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:27:21.718   14:38:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
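Lines @64-66 above wrap the live bdev configuration in a top-level "subsystems" array so the target can later be restarted from it; a sketch of the assembly, assuming the output lands in the spdk_tgt_cnfg file exported earlier:

    {
        echo '{"subsystems": ['
        $rpc_py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json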
00:27:21.718   14:38:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:27:21.718   14:38:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:27:21.975  /dev/nbd0
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:27:21.975  1+0 records in
00:27:21.975  1+0 records out
00:27:21.975  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302585 s, 13.5 MB/s
00:27:21.975    14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0
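waitfornbd (autotest_common.sh@872-893 above) first waits for the kernel to publish the device in /proc/partitions, then proves it is readable with a single direct-I/O block. A condensed sketch of the traced logic; the sleep intervals are assumptions:

    waitfornbd_sketch() {
        local nbd_name=$1 i tmp=$testdir/nbdtest size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp") && rm -f "$tmp"
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }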
00:27:21.975   14:38:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
00:27:22.233  [2024-11-20 14:38:01.026125] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:27:22.233  [2024-11-20 14:38:01.026495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81419 ]
00:27:22.233  [2024-11-20 14:38:01.202594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:22.491  [2024-11-20 14:38:01.335751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:23.864  
[2024-11-20T14:38:03.782Z] Copying: 166/1024 [MB] (166 MBps)
[2024-11-20T14:38:04.715Z] Copying: 331/1024 [MB] (164 MBps)
[2024-11-20T14:38:06.087Z] Copying: 493/1024 [MB] (162 MBps)
[2024-11-20T14:38:07.020Z] Copying: 656/1024 [MB] (162 MBps)
[2024-11-20T14:38:07.952Z] Copying: 819/1024 [MB] (163 MBps)
[2024-11-20T14:38:08.210Z] Copying: 959/1024 [MB] (139 MBps)
[2024-11-20T14:38:09.584Z] Copying: 1024/1024 [MB] (average 158 MBps)
00:27:30.602  
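The first spdk_dd pass sizes check out: --count=262144 blocks of --bs=4096 bytes is the 1024 MB the progress lines count up to, i.e. 1 GiB of urandom staged in testfile before it is checksummed below and then written into FTL through /dev/nbd0:

    echo $(( 262144 * 4096 / 1024 / 1024 ))   # -> 1024 (MiB)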
00:27:30.602   14:38:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:27:32.511   14:38:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:27:32.511  [2024-11-20 14:38:11.463078] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:27:32.511  [2024-11-20 14:38:11.463236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81522 ]
00:27:32.774  [2024-11-20 14:38:11.641299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:33.032  [2024-11-20 14:38:11.786006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:34.417  
[2024-11-20T14:38:14.332Z] Copying: 16/1024 [MB] (16 MBps)
[2024-11-20T14:38:15.265Z] Copying: 28/1024 [MB] (12 MBps)
[2024-11-20T14:38:16.208Z] Copying: 39/1024 [MB] (10 MBps)
[2024-11-20T14:38:17.140Z] Copying: 56/1024 [MB] (16 MBps)
[2024-11-20T14:38:18.511Z] Copying: 72/1024 [MB] (15 MBps)
[2024-11-20T14:38:19.445Z] Copying: 88/1024 [MB] (16 MBps)
[2024-11-20T14:38:20.378Z] Copying: 104/1024 [MB] (15 MBps)
[2024-11-20T14:38:21.315Z] Copying: 120/1024 [MB] (15 MBps)
[2024-11-20T14:38:22.250Z] Copying: 136/1024 [MB] (16 MBps)
[2024-11-20T14:38:23.184Z] Copying: 152/1024 [MB] (16 MBps)
[2024-11-20T14:38:24.120Z] Copying: 169/1024 [MB] (16 MBps)
[2024-11-20T14:38:25.495Z] Copying: 185/1024 [MB] (15 MBps)
[2024-11-20T14:38:26.428Z] Copying: 202/1024 [MB] (17 MBps)
[2024-11-20T14:38:27.360Z] Copying: 219/1024 [MB] (16 MBps)
[2024-11-20T14:38:28.296Z] Copying: 237/1024 [MB] (17 MBps)
[2024-11-20T14:38:29.231Z] Copying: 255/1024 [MB] (18 MBps)
[2024-11-20T14:38:30.166Z] Copying: 273/1024 [MB] (17 MBps)
[2024-11-20T14:38:31.101Z] Copying: 291/1024 [MB] (18 MBps)
[2024-11-20T14:38:32.531Z] Copying: 309/1024 [MB] (18 MBps)
[2024-11-20T14:38:33.097Z] Copying: 326/1024 [MB] (16 MBps)
[2024-11-20T14:38:34.471Z] Copying: 344/1024 [MB] (17 MBps)
[2024-11-20T14:38:35.403Z] Copying: 360/1024 [MB] (16 MBps)
[2024-11-20T14:38:36.336Z] Copying: 376/1024 [MB] (16 MBps)
[2024-11-20T14:38:37.268Z] Copying: 392/1024 [MB] (15 MBps)
[2024-11-20T14:38:38.202Z] Copying: 407/1024 [MB] (15 MBps)
[2024-11-20T14:38:39.136Z] Copying: 424/1024 [MB] (16 MBps)
[2024-11-20T14:38:40.511Z] Copying: 440/1024 [MB] (16 MBps)
[2024-11-20T14:38:41.446Z] Copying: 457/1024 [MB] (16 MBps)
[2024-11-20T14:38:42.381Z] Copying: 473/1024 [MB] (16 MBps)
[2024-11-20T14:38:43.313Z] Copying: 489/1024 [MB] (16 MBps)
[2024-11-20T14:38:44.246Z] Copying: 505/1024 [MB] (16 MBps)
[2024-11-20T14:38:45.179Z] Copying: 523/1024 [MB] (17 MBps)
[2024-11-20T14:38:46.115Z] Copying: 540/1024 [MB] (17 MBps)
[2024-11-20T14:38:47.491Z] Copying: 557/1024 [MB] (16 MBps)
[2024-11-20T14:38:48.424Z] Copying: 573/1024 [MB] (16 MBps)
[2024-11-20T14:38:49.357Z] Copying: 590/1024 [MB] (17 MBps)
[2024-11-20T14:38:50.291Z] Copying: 607/1024 [MB] (16 MBps)
[2024-11-20T14:38:51.225Z] Copying: 624/1024 [MB] (17 MBps)
[2024-11-20T14:38:52.158Z] Copying: 641/1024 [MB] (17 MBps)
[2024-11-20T14:38:53.530Z] Copying: 659/1024 [MB] (17 MBps)
[2024-11-20T14:38:54.463Z] Copying: 676/1024 [MB] (17 MBps)
[2024-11-20T14:38:55.396Z] Copying: 694/1024 [MB] (17 MBps)
[2024-11-20T14:38:56.331Z] Copying: 712/1024 [MB] (17 MBps)
[2024-11-20T14:38:57.264Z] Copying: 728/1024 [MB] (16 MBps)
[2024-11-20T14:38:58.200Z] Copying: 746/1024 [MB] (17 MBps)
[2024-11-20T14:38:59.135Z] Copying: 761/1024 [MB] (15 MBps)
[2024-11-20T14:39:00.509Z] Copying: 776/1024 [MB] (14 MBps)
[2024-11-20T14:39:01.443Z] Copying: 791/1024 [MB] (14 MBps)
[2024-11-20T14:39:02.377Z] Copying: 807/1024 [MB] (16 MBps)
[2024-11-20T14:39:03.311Z] Copying: 823/1024 [MB] (16 MBps)
[2024-11-20T14:39:04.245Z] Copying: 840/1024 [MB] (16 MBps)
[2024-11-20T14:39:05.231Z] Copying: 856/1024 [MB] (16 MBps)
[2024-11-20T14:39:06.164Z] Copying: 872/1024 [MB] (16 MBps)
[2024-11-20T14:39:07.099Z] Copying: 888/1024 [MB] (16 MBps)
[2024-11-20T14:39:08.481Z] Copying: 904/1024 [MB] (15 MBps)
[2024-11-20T14:39:09.415Z] Copying: 920/1024 [MB] (15 MBps)
[2024-11-20T14:39:10.345Z] Copying: 936/1024 [MB] (16 MBps)
[2024-11-20T14:39:11.278Z] Copying: 952/1024 [MB] (15 MBps)
[2024-11-20T14:39:12.212Z] Copying: 968/1024 [MB] (16 MBps)
[2024-11-20T14:39:13.146Z] Copying: 984/1024 [MB] (15 MBps)
[2024-11-20T14:39:14.536Z] Copying: 1000/1024 [MB] (16 MBps)
[2024-11-20T14:39:14.794Z] Copying: 1016/1024 [MB] (16 MBps)
[2024-11-20T14:39:15.727Z] Copying: 1024/1024 [MB] (average 16 MBps)
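The two copy rates are worth contrasting: the file-to-file run above averaged 158 MBps, while this write through /dev/nbd0 averages 16 MBps, which checks out as roughly 1,024 MiB over the ~64 s between the reactor start at 14:38:11 and the final progress line at 14:39:15. The order-of-magnitude drop is consistent with synchronous 4 KiB O_DIRECT writes round-tripping through the kernel nbd layer into the FTL bdev.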
00:28:36.745  
00:28:36.745   14:39:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0
00:28:36.745   14:39:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0
00:28:37.004   14:39:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:28:37.262  [2024-11-20 14:39:16.155538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.262  [2024-11-20 14:39:16.155631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:28:37.262  [2024-11-20 14:39:16.155655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:28:37.262  [2024-11-20 14:39:16.155674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.262  [2024-11-20 14:39:16.155713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:37.262  [2024-11-20 14:39:16.159112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.263  [2024-11-20 14:39:16.159150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:28:37.263  [2024-11-20 14:39:16.159170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.368 ms
00:28:37.263  [2024-11-20 14:39:16.159183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.263  [2024-11-20 14:39:16.160743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.263  [2024-11-20 14:39:16.160789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:28:37.263  [2024-11-20 14:39:16.160811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.515 ms
00:28:37.263  [2024-11-20 14:39:16.160824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.263  [2024-11-20 14:39:16.175896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.263  [2024-11-20 14:39:16.175949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:28:37.263  [2024-11-20 14:39:16.175972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.034 ms
00:28:37.263  [2024-11-20 14:39:16.175986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.263  [2024-11-20 14:39:16.182937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.263  [2024-11-20 14:39:16.182977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:28:37.263  [2024-11-20 14:39:16.182997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.897 ms
00:28:37.263  [2024-11-20 14:39:16.183010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.263  [2024-11-20 14:39:16.215268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.263  [2024-11-20 14:39:16.215336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:28:37.263  [2024-11-20 14:39:16.215361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.143 ms
00:28:37.263  [2024-11-20 14:39:16.215381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.263  [2024-11-20 14:39:16.234242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.263  [2024-11-20 14:39:16.234436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:28:37.263  [2024-11-20 14:39:16.234479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.782 ms
00:28:37.263  [2024-11-20 14:39:16.234493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.263  [2024-11-20 14:39:16.234724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.263  [2024-11-20 14:39:16.234748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:28:37.263  [2024-11-20 14:39:16.234765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.163 ms
00:28:37.263  [2024-11-20 14:39:16.234777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.523  [2024-11-20 14:39:16.266539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.523  [2024-11-20 14:39:16.266604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:28:37.523  [2024-11-20 14:39:16.266627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.729 ms
00:28:37.523  [2024-11-20 14:39:16.266640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.523  [2024-11-20 14:39:16.298092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.523  [2024-11-20 14:39:16.298140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:28:37.523  [2024-11-20 14:39:16.298163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.391 ms
00:28:37.523  [2024-11-20 14:39:16.298175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.523  [2024-11-20 14:39:16.330046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.523  [2024-11-20 14:39:16.330105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:28:37.524  [2024-11-20 14:39:16.330127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.806 ms
00:28:37.524  [2024-11-20 14:39:16.330140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.524  [2024-11-20 14:39:16.361612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.524  [2024-11-20 14:39:16.361658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:28:37.524  [2024-11-20 14:39:16.361680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.305 ms
00:28:37.524  [2024-11-20 14:39:16.361692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.524  [2024-11-20 14:39:16.361748] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:37.524  [2024-11-20 14:39:16.361773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.361988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.524  [2024-11-20 14:39:16.362973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.362987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.362999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:28:37.525  [2024-11-20 14:39:16.363204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
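The band math here is self-consistent: each band advertises 261,120 blocks, and at FTL's 4 KiB block size that is 261,120 x 4,096 B = 1,020 MiB per band, so the 100 bands cover about 99.6 GiB of the 101 GiB base device, with the remainder going to per-band and global metadata. Every band showing 0 valid blocks and wr_cnt 0 is consistent with the 1 GiB of user data still sitting in the NV write cache (the reload below reports full chunks = 2 of the cache's 5 chunks of roughly 1 GiB each) rather than having been compacted into bands.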
00:28:37.525  [2024-11-20 14:39:16.363225] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:28:37.525  [2024-11-20 14:39:16.363250] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         016b7f42-3b0a-4c4c-8075-b33620d527e3
00:28:37.525  [2024-11-20 14:39:16.363262] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    0
00:28:37.525  [2024-11-20 14:39:16.363277] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:28:37.525  [2024-11-20 14:39:16.363291] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:28:37.525  [2024-11-20 14:39:16.363306] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:28:37.525  [2024-11-20 14:39:16.363317] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:37.525  [2024-11-20 14:39:16.363331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:28:37.525  [2024-11-20 14:39:16.363343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:28:37.525  [2024-11-20 14:39:16.363355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:28:37.525  [2024-11-20 14:39:16.363365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:28:37.525  [2024-11-20 14:39:16.363379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.525  [2024-11-20 14:39:16.363391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:28:37.525  [2024-11-20 14:39:16.363433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.635 ms
00:28:37.525  [2024-11-20 14:39:16.363454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
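The 'WAF: inf' line is simple arithmetic: the write amplification factor is total writes divided by user writes, here 960 / 0, and the division by zero is rendered as inf. The 960 writes are FTL-internal traffic (metadata such as the superblock, band info, and valid map), with no user-data writes counted against the bands.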
00:28:37.525  [2024-11-20 14:39:16.380378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.525  [2024-11-20 14:39:16.380423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:28:37.525  [2024-11-20 14:39:16.380444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.841 ms
00:28:37.525  [2024-11-20 14:39:16.380457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.525  [2024-11-20 14:39:16.380938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:37.525  [2024-11-20 14:39:16.380965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:28:37.525  [2024-11-20 14:39:16.380983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.442 ms
00:28:37.525  [2024-11-20 14:39:16.380995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.525  [2024-11-20 14:39:16.436680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.525  [2024-11-20 14:39:16.436746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:28:37.525  [2024-11-20 14:39:16.436768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.525  [2024-11-20 14:39:16.436781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.525  [2024-11-20 14:39:16.436865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.525  [2024-11-20 14:39:16.436882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:28:37.525  [2024-11-20 14:39:16.436897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.525  [2024-11-20 14:39:16.436909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.525  [2024-11-20 14:39:16.437063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.525  [2024-11-20 14:39:16.437089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:28:37.525  [2024-11-20 14:39:16.437116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.525  [2024-11-20 14:39:16.437129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.525  [2024-11-20 14:39:16.437163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.525  [2024-11-20 14:39:16.437177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:28:37.525  [2024-11-20 14:39:16.437191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.525  [2024-11-20 14:39:16.437203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.784  [2024-11-20 14:39:16.541564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.784  [2024-11-20 14:39:16.541646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:28:37.784  [2024-11-20 14:39:16.541670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.784  [2024-11-20 14:39:16.541683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.784  [2024-11-20 14:39:16.626535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.784  [2024-11-20 14:39:16.626622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:28:37.784  [2024-11-20 14:39:16.626647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.784  [2024-11-20 14:39:16.626660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.784  [2024-11-20 14:39:16.626798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.784  [2024-11-20 14:39:16.626819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:28:37.784  [2024-11-20 14:39:16.626838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.784  [2024-11-20 14:39:16.626850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.784  [2024-11-20 14:39:16.626931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.784  [2024-11-20 14:39:16.626950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:28:37.784  [2024-11-20 14:39:16.626966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.784  [2024-11-20 14:39:16.626977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.784  [2024-11-20 14:39:16.627117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.784  [2024-11-20 14:39:16.627138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:28:37.784  [2024-11-20 14:39:16.627153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.784  [2024-11-20 14:39:16.627168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.784  [2024-11-20 14:39:16.627224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.784  [2024-11-20 14:39:16.627242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:28:37.784  [2024-11-20 14:39:16.627258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.784  [2024-11-20 14:39:16.627270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.784  [2024-11-20 14:39:16.627321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.784  [2024-11-20 14:39:16.627344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:28:37.784  [2024-11-20 14:39:16.627360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.784  [2024-11-20 14:39:16.627374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.784  [2024-11-20 14:39:16.627457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:28:37.784  [2024-11-20 14:39:16.627482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:28:37.784  [2024-11-20 14:39:16.627503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:28:37.784  [2024-11-20 14:39:16.627515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:37.784  [2024-11-20 14:39:16.627699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 472.128 ms, result 0
00:28:37.784  true
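Steps @78 through @80 are the orderly teardown that produced the long 'FTL shutdown' trace ending here: flush the nbd device, detach it over JSON-RPC, then unload the FTL bdev, which persists the L2P, NV cache, valid-map, band, and trim metadata plus the superblock, sets the clean state, and finally runs the zero-duration Rollback entries, the registered undo handlers of each init step, in reverse order (472.128 ms in total). The same sequence as bare commands, with paths as printed in the trace:

  sync /dev/nbd0                              # flush the kernel block layer
  scripts/rpc.py nbd_stop_disk /dev/nbd0      # detach the nbd client
  scripts/rpc.py bdev_ftl_unload -b ftl0      # clean FTL shutdown: persist metadata, mark clean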
00:28:37.784   14:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81281
00:28:37.784   14:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81281
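Steps @83 and @84 are the 'dirty' half of the test name: the spdk_tgt that hosted ftl0 (pid 81281) is killed with SIGKILL, so nothing beyond what the clean unload above already wrote can be flushed, and the orphaned trace file is removed. As a sketch (the pid variable name is hypothetical):

  kill -9 "$tgt_pid"                            # SIGKILL: no graceful shutdown path runs
  rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"   # drop the stale trace buffer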
00:28:37.784   14:39:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:28:37.784  [2024-11-20 14:39:16.757075] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:28:37.784  [2024-11-20 14:39:16.757471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82157 ]
00:28:38.043  [2024-11-20 14:39:16.942610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:38.301  [2024-11-20 14:39:17.048992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:39.678  
[2024-11-20T14:39:19.595Z] Copying: 165/1024 [MB] (165 MBps)
[2024-11-20T14:39:20.527Z] Copying: 329/1024 [MB] (163 MBps)
[2024-11-20T14:39:21.461Z] Copying: 492/1024 [MB] (163 MBps)
[2024-11-20T14:39:22.397Z] Copying: 658/1024 [MB] (165 MBps)
[2024-11-20T14:39:23.387Z] Copying: 822/1024 [MB] (164 MBps)
[2024-11-20T14:39:23.662Z] Copying: 988/1024 [MB] (165 MBps)
[2024-11-20T14:39:24.597Z] Copying: 1024/1024 [MB] (average 164 MBps)
00:28:45.615  
00:28:45.615  /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81281 Killed                  "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:28:45.615   14:39:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:28:45.873  [2024-11-20 14:39:24.650424] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:28:45.873  [2024-11-20 14:39:24.650595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82239 ]
00:28:45.873  [2024-11-20 14:39:24.830355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:46.131  [2024-11-20 14:39:24.935095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:46.389  [2024-11-20 14:39:25.260990] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:46.389  [2024-11-20 14:39:25.261072] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:28:46.389  [2024-11-20 14:39:25.328000] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore
00:28:46.389  [2024-11-20 14:39:25.328346] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:28:46.389  [2024-11-20 14:39:25.328553] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
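With the target gone, the @88 spdk_dd brings the stack up in-process from ftl.json. The two 'unable to find bdev nvc0n1' notices are bdev_open_ext retrying while the JSON config is still attaching devices, and the blobstore recovery plus blob replay lines are consistent with storage that was never cleanly unloaded after the kill -9: its metadata has to be reconciled before FTL can open nvc0n1p0.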
00:28:46.648  [2024-11-20 14:39:25.563924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.563997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:28:46.648  [2024-11-20 14:39:25.564020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:28:46.648  [2024-11-20 14:39:25.564032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.564110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.564129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:28:46.648  [2024-11-20 14:39:25.564144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.043 ms
00:28:46.648  [2024-11-20 14:39:25.564155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.564187] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:28:46.648  [2024-11-20 14:39:25.565150] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:28:46.648  [2024-11-20 14:39:25.565186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.565201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:28:46.648  [2024-11-20 14:39:25.565221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.006 ms
00:28:46.648  [2024-11-20 14:39:25.565241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.566598] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:28:46.648  [2024-11-20 14:39:25.583038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.583097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:28:46.648  [2024-11-20 14:39:25.583117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.441 ms
00:28:46.648  [2024-11-20 14:39:25.583130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.583212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.583243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:28:46.648  [2024-11-20 14:39:25.583260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.029 ms
00:28:46.648  [2024-11-20 14:39:25.583273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.587893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.587950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:28:46.648  [2024-11-20 14:39:25.587969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.512 ms
00:28:46.648  [2024-11-20 14:39:25.587981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.588091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.588112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:28:46.648  [2024-11-20 14:39:25.588126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.069 ms
00:28:46.648  [2024-11-20 14:39:25.588138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.588208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.588226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:28:46.648  [2024-11-20 14:39:25.588238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:28:46.648  [2024-11-20 14:39:25.588249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.588285] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:28:46.648  [2024-11-20 14:39:25.592610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.592651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:28:46.648  [2024-11-20 14:39:25.592669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.334 ms
00:28:46.648  [2024-11-20 14:39:25.592681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.592721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.592737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:28:46.648  [2024-11-20 14:39:25.592750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:28:46.648  [2024-11-20 14:39:25.592761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.592819] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:28:46.648  [2024-11-20 14:39:25.592851] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:28:46.648  [2024-11-20 14:39:25.592896] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:28:46.648  [2024-11-20 14:39:25.592918] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:28:46.648  [2024-11-20 14:39:25.593033] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:28:46.648  [2024-11-20 14:39:25.593050] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:28:46.648  [2024-11-20 14:39:25.593066] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:28:46.648  [2024-11-20 14:39:25.593081] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:28:46.648  [2024-11-20 14:39:25.593101] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:28:46.648  [2024-11-20 14:39:25.593114] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:28:46.648  [2024-11-20 14:39:25.593125] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:28:46.648  [2024-11-20 14:39:25.593136] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:28:46.648  [2024-11-20 14:39:25.593147] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
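These layout figures cross-check: 20,971,520 L2P entries x 4-byte addresses = 83,886,080 B = 80 MiB, exactly the 'Region l2p ... 80.00 MiB' entry in the NV cache layout dumped below, and at the 4 KiB block size those entries map 20,971,520 x 4 KiB = 80 GiB of user-addressable space out of the 101 GiB base device.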
00:28:46.648  [2024-11-20 14:39:25.593159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.593171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:28:46.648  [2024-11-20 14:39:25.593183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.344 ms
00:28:46.648  [2024-11-20 14:39:25.593195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.593296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.648  [2024-11-20 14:39:25.593316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:28:46.648  [2024-11-20 14:39:25.593329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:28:46.648  [2024-11-20 14:39:25.593340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.648  [2024-11-20 14:39:25.593487] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:28:46.648  [2024-11-20 14:39:25.593508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:28:46.648  [2024-11-20 14:39:25.593522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:28:46.648  [2024-11-20 14:39:25.593534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:46.648  [2024-11-20 14:39:25.593546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:28:46.648  [2024-11-20 14:39:25.593557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:28:46.648  [2024-11-20 14:39:25.593591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:28:46.648  [2024-11-20 14:39:25.593610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:28:46.648  [2024-11-20 14:39:25.593622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:28:46.648  [2024-11-20 14:39:25.593633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:28:46.648  [2024-11-20 14:39:25.593645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:28:46.648  [2024-11-20 14:39:25.593669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:28:46.648  [2024-11-20 14:39:25.593681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:28:46.648  [2024-11-20 14:39:25.593691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:28:46.648  [2024-11-20 14:39:25.593702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:28:46.648  [2024-11-20 14:39:25.593713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:46.648  [2024-11-20 14:39:25.593725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:28:46.648  [2024-11-20 14:39:25.593737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:28:46.648  [2024-11-20 14:39:25.593747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:46.648  [2024-11-20 14:39:25.593758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:28:46.648  [2024-11-20 14:39:25.593769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:28:46.648  [2024-11-20 14:39:25.593779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:46.648  [2024-11-20 14:39:25.593790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:28:46.648  [2024-11-20 14:39:25.593801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:28:46.648  [2024-11-20 14:39:25.593812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:46.648  [2024-11-20 14:39:25.593822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:28:46.648  [2024-11-20 14:39:25.593833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:28:46.648  [2024-11-20 14:39:25.593843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:46.648  [2024-11-20 14:39:25.593854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:28:46.648  [2024-11-20 14:39:25.593865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:28:46.648  [2024-11-20 14:39:25.593876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:28:46.648  [2024-11-20 14:39:25.593887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:28:46.648  [2024-11-20 14:39:25.593898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:28:46.648  [2024-11-20 14:39:25.593909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:28:46.648  [2024-11-20 14:39:25.593920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:28:46.648  [2024-11-20 14:39:25.593930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:28:46.648  [2024-11-20 14:39:25.593941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:28:46.648  [2024-11-20 14:39:25.593952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:28:46.649  [2024-11-20 14:39:25.593963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:28:46.649  [2024-11-20 14:39:25.593973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:46.649  [2024-11-20 14:39:25.593984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:28:46.649  [2024-11-20 14:39:25.593994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:28:46.649  [2024-11-20 14:39:25.594005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:46.649  [2024-11-20 14:39:25.594015] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:28:46.649  [2024-11-20 14:39:25.594028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:28:46.649  [2024-11-20 14:39:25.594039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:28:46.649  [2024-11-20 14:39:25.594055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:28:46.649  [2024-11-20 14:39:25.594066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:28:46.649  [2024-11-20 14:39:25.594077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:28:46.649  [2024-11-20 14:39:25.594090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:28:46.649  [2024-11-20 14:39:25.594102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:28:46.649  [2024-11-20 14:39:25.594112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:28:46.649  [2024-11-20 14:39:25.594122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:28:46.649  [2024-11-20 14:39:25.594135] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:28:46.649  [2024-11-20 14:39:25.594149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:46.649  [2024-11-20 14:39:25.594162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:28:46.649  [2024-11-20 14:39:25.594174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:28:46.649  [2024-11-20 14:39:25.594186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:28:46.649  [2024-11-20 14:39:25.594197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:28:46.649  [2024-11-20 14:39:25.594209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:28:46.649  [2024-11-20 14:39:25.594222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:28:46.649  [2024-11-20 14:39:25.594233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:28:46.649  [2024-11-20 14:39:25.594245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:28:46.649  [2024-11-20 14:39:25.594257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:28:46.649  [2024-11-20 14:39:25.594269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:28:46.649  [2024-11-20 14:39:25.594281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:28:46.649  [2024-11-20 14:39:25.594292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:28:46.649  [2024-11-20 14:39:25.594304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:28:46.649  [2024-11-20 14:39:25.594320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:28:46.649  [2024-11-20 14:39:25.594332] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:28:46.649  [2024-11-20 14:39:25.594345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:28:46.649  [2024-11-20 14:39:25.594358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:28:46.649  [2024-11-20 14:39:25.594370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:28:46.649  [2024-11-20 14:39:25.594381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:28:46.649  [2024-11-20 14:39:25.594393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:28:46.649  [2024-11-20 14:39:25.594406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.649  [2024-11-20 14:39:25.594418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:28:46.649  [2024-11-20 14:39:25.594430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.990 ms
00:28:46.649  [2024-11-20 14:39:25.594442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.649  [2024-11-20 14:39:25.627852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.649  [2024-11-20 14:39:25.627920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:28:46.649  [2024-11-20 14:39:25.627943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.320 ms
00:28:46.649  [2024-11-20 14:39:25.627957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.649  [2024-11-20 14:39:25.628075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.649  [2024-11-20 14:39:25.628098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:28:46.649  [2024-11-20 14:39:25.628112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.067 ms
00:28:46.649  [2024-11-20 14:39:25.628123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.907  [2024-11-20 14:39:25.682008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.907  [2024-11-20 14:39:25.682076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:28:46.907  [2024-11-20 14:39:25.682103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 53.784 ms
00:28:46.907  [2024-11-20 14:39:25.682116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.907  [2024-11-20 14:39:25.682201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.907  [2024-11-20 14:39:25.682219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:28:46.907  [2024-11-20 14:39:25.682233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:28:46.907  [2024-11-20 14:39:25.682244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.907  [2024-11-20 14:39:25.682657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.907  [2024-11-20 14:39:25.682678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:28:46.907  [2024-11-20 14:39:25.682693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.302 ms
00:28:46.907  [2024-11-20 14:39:25.682704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.907  [2024-11-20 14:39:25.682873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.907  [2024-11-20 14:39:25.682892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:28:46.907  [2024-11-20 14:39:25.682904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.129 ms
00:28:46.907  [2024-11-20 14:39:25.682915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.907  [2024-11-20 14:39:25.699756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.907  [2024-11-20 14:39:25.699821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:28:46.907  [2024-11-20 14:39:25.699842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.809 ms
00:28:46.907  [2024-11-20 14:39:25.699855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.716535] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:28:46.908  [2024-11-20 14:39:25.716785] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:28:46.908  [2024-11-20 14:39:25.716814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.716828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:28:46.908  [2024-11-20 14:39:25.716843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.769 ms
00:28:46.908  [2024-11-20 14:39:25.716855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.746926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.747015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:28:46.908  [2024-11-20 14:39:25.747062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.001 ms
00:28:46.908  [2024-11-20 14:39:25.747076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.765629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.765700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:28:46.908  [2024-11-20 14:39:25.765721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.450 ms
00:28:46.908  [2024-11-20 14:39:25.765733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.781884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.782181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:28:46.908  [2024-11-20 14:39:25.782213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.064 ms
00:28:46.908  [2024-11-20 14:39:25.782227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.783150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.783189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:28:46.908  [2024-11-20 14:39:25.783205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.706 ms
00:28:46.908  [2024-11-20 14:39:25.783217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.858745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.858822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:28:46.908  [2024-11-20 14:39:25.858844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 75.498 ms
00:28:46.908  [2024-11-20 14:39:25.858857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.872140] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:28:46.908  [2024-11-20 14:39:25.875017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.875055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:28:46.908  [2024-11-20 14:39:25.875076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.057 ms
00:28:46.908  [2024-11-20 14:39:25.875087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.875238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.875271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:28:46.908  [2024-11-20 14:39:25.875287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.010 ms
00:28:46.908  [2024-11-20 14:39:25.875299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.875398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.875430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:28:46.908  [2024-11-20 14:39:25.875455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.036 ms
00:28:46.908  [2024-11-20 14:39:25.875474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.875519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.875552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:28:46.908  [2024-11-20 14:39:25.875565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:28:46.908  [2024-11-20 14:39:25.875601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:46.908  [2024-11-20 14:39:25.875648] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:28:46.908  [2024-11-20 14:39:25.875664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.908  [2024-11-20 14:39:25.875676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:28:46.908  [2024-11-20 14:39:25.875688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:28:46.908  [2024-11-20 14:39:25.875699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:47.166  [2024-11-20 14:39:25.908112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.166  [2024-11-20 14:39:25.908184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:28:47.166  [2024-11-20 14:39:25.908205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.375 ms
00:28:47.166  [2024-11-20 14:39:25.908219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:47.166  [2024-11-20 14:39:25.908347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:47.166  [2024-11-20 14:39:25.908368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:28:47.166  [2024-11-20 14:39:25.908382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.051 ms
00:28:47.166  [2024-11-20 14:39:25.908394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:28:47.166  [2024-11-20 14:39:25.910305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 345.542 ms, result 0
00:28:48.100  
[2024-11-20T14:39:28.016Z] Copying: 27/1024 [MB] (27 MBps)
[2024-11-20T14:39:28.949Z] Copying: 55/1024 [MB] (27 MBps)
[2024-11-20T14:39:30.320Z] Copying: 82/1024 [MB] (27 MBps)
[2024-11-20T14:39:31.252Z] Copying: 115/1024 [MB] (32 MBps)
[2024-11-20T14:39:32.184Z] Copying: 144/1024 [MB] (29 MBps)
[2024-11-20T14:39:33.116Z] Copying: 176/1024 [MB] (31 MBps)
[2024-11-20T14:39:34.080Z] Copying: 206/1024 [MB] (29 MBps)
[2024-11-20T14:39:35.029Z] Copying: 235/1024 [MB] (29 MBps)
[2024-11-20T14:39:35.963Z] Copying: 264/1024 [MB] (28 MBps)
[2024-11-20T14:39:37.336Z] Copying: 296/1024 [MB] (32 MBps)
[2024-11-20T14:39:38.271Z] Copying: 327/1024 [MB] (30 MBps)
[2024-11-20T14:39:39.204Z] Copying: 352/1024 [MB] (25 MBps)
[2024-11-20T14:39:40.136Z] Copying: 376/1024 [MB] (24 MBps)
[2024-11-20T14:39:41.068Z] Copying: 405/1024 [MB] (28 MBps)
[2024-11-20T14:39:42.001Z] Copying: 434/1024 [MB] (29 MBps)
[2024-11-20T14:39:42.947Z] Copying: 465/1024 [MB] (30 MBps)
[2024-11-20T14:39:43.959Z] Copying: 494/1024 [MB] (29 MBps)
[2024-11-20T14:39:45.332Z] Copying: 525/1024 [MB] (30 MBps)
[2024-11-20T14:39:46.265Z] Copying: 554/1024 [MB] (29 MBps)
[2024-11-20T14:39:47.201Z] Copying: 582/1024 [MB] (28 MBps)
[2024-11-20T14:39:48.135Z] Copying: 610/1024 [MB] (27 MBps)
[2024-11-20T14:39:49.070Z] Copying: 639/1024 [MB] (29 MBps)
[2024-11-20T14:39:50.006Z] Copying: 668/1024 [MB] (28 MBps)
[2024-11-20T14:39:50.940Z] Copying: 697/1024 [MB] (28 MBps)
[2024-11-20T14:39:52.351Z] Copying: 725/1024 [MB] (28 MBps)
[2024-11-20T14:39:53.282Z] Copying: 754/1024 [MB] (28 MBps)
[2024-11-20T14:39:54.215Z] Copying: 787/1024 [MB] (32 MBps)
[2024-11-20T14:39:55.146Z] Copying: 817/1024 [MB] (30 MBps)
[2024-11-20T14:39:56.078Z] Copying: 848/1024 [MB] (31 MBps)
[2024-11-20T14:39:57.010Z] Copying: 880/1024 [MB] (32 MBps)
[2024-11-20T14:39:57.945Z] Copying: 911/1024 [MB] (30 MBps)
[2024-11-20T14:39:59.316Z] Copying: 942/1024 [MB] (31 MBps)
[2024-11-20T14:40:00.247Z] Copying: 974/1024 [MB] (31 MBps)
[2024-11-20T14:40:01.180Z] Copying: 1006/1024 [MB] (31 MBps)
[2024-11-20T14:40:01.776Z] Copying: 1023/1024 [MB] (17 MBps)
[2024-11-20T14:40:01.776Z] Copying: 1024/1024 [MB] (average 28 MBps)
00:29:22.794  [2024-11-20 14:40:01.719026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.794  [2024-11-20 14:40:01.719103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:29:22.794  [2024-11-20 14:40:01.719129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:29:22.794  [2024-11-20 14:40:01.719142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:22.794  [2024-11-20 14:40:01.720385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:29:22.794  [2024-11-20 14:40:01.726226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.794  [2024-11-20 14:40:01.726422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:29:22.794  [2024-11-20 14:40:01.726455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.781 ms
00:29:22.794  [2024-11-20 14:40:01.726469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:22.794  [2024-11-20 14:40:01.740268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.794  [2024-11-20 14:40:01.740321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:29:22.794  [2024-11-20 14:40:01.740341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 10.170 ms
00:29:22.794  [2024-11-20 14:40:01.740354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:22.794  [2024-11-20 14:40:01.760784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.794  [2024-11-20 14:40:01.760835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:29:22.794  [2024-11-20 14:40:01.760854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.405 ms
00:29:22.794  [2024-11-20 14:40:01.760867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:22.794  [2024-11-20 14:40:01.767579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:22.794  [2024-11-20 14:40:01.767776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:29:22.794  [2024-11-20 14:40:01.767805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.668 ms
00:29:22.794  [2024-11-20 14:40:01.767819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.052  [2024-11-20 14:40:01.799138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:23.052  [2024-11-20 14:40:01.799371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:29:23.052  [2024-11-20 14:40:01.799405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.252 ms
00:29:23.052  [2024-11-20 14:40:01.799431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.052  [2024-11-20 14:40:01.817260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:23.052  [2024-11-20 14:40:01.817330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:29:23.052  [2024-11-20 14:40:01.817350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.756 ms
00:29:23.052  [2024-11-20 14:40:01.817363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.052  [2024-11-20 14:40:01.902119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:23.052  [2024-11-20 14:40:01.902476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:29:23.052  [2024-11-20 14:40:01.902531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 84.680 ms
00:29:23.052  [2024-11-20 14:40:01.902546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.052  [2024-11-20 14:40:01.936241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:23.052  [2024-11-20 14:40:01.936337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:29:23.052  [2024-11-20 14:40:01.936360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.628 ms
00:29:23.052  [2024-11-20 14:40:01.936372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.052  [2024-11-20 14:40:01.969526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:23.052  [2024-11-20 14:40:01.969800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:29:23.052  [2024-11-20 14:40:01.969834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.067 ms
00:29:23.052  [2024-11-20 14:40:01.969848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.052  [2024-11-20 14:40:02.002700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:23.052  [2024-11-20 14:40:02.002765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:29:23.052  [2024-11-20 14:40:02.002785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.785 ms
00:29:23.052  [2024-11-20 14:40:02.002797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.311  [2024-11-20 14:40:02.033775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:23.311  [2024-11-20 14:40:02.033830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:29:23.311  [2024-11-20 14:40:02.033849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.854 ms
00:29:23.311  [2024-11-20 14:40:02.033861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.311  [2024-11-20 14:40:02.033911] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:29:23.311  [2024-11-20 14:40:02.033937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   130048 / 261120 	wr_cnt: 1	state: open
00:29:23.311  [2024-11-20 14:40:02.033952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.311  [2024-11-20 14:40:02.033965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.311  [2024-11-20 14:40:02.033977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.311  [2024-11-20 14:40:02.033989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.311  [2024-11-20 14:40:02.034001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.034984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.312  [2024-11-20 14:40:02.035542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.313  [2024-11-20 14:40:02.035565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.313  [2024-11-20 14:40:02.035603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.313  [2024-11-20 14:40:02.035628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.313  [2024-11-20 14:40:02.035652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.313  [2024-11-20 14:40:02.035673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.313  [2024-11-20 14:40:02.035690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:29:23.313  [2024-11-20 14:40:02.035724] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:29:23.313  [2024-11-20 14:40:02.035740] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         016b7f42-3b0a-4c4c-8075-b33620d527e3
00:29:23.313  [2024-11-20 14:40:02.035756] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    130048
00:29:23.313  [2024-11-20 14:40:02.035780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        131008
00:29:23.313  [2024-11-20 14:40:02.035815] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         130048
00:29:23.313  [2024-11-20 14:40:02.035837] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0074
00:29:23.313  [2024-11-20 14:40:02.035857] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:23.313  [2024-11-20 14:40:02.035878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:29:23.313  [2024-11-20 14:40:02.035898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:29:23.313  [2024-11-20 14:40:02.035918] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:29:23.313  [2024-11-20 14:40:02.035938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:29:23.313  [2024-11-20 14:40:02.035961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:23.313  [2024-11-20 14:40:02.035981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:29:23.313  [2024-11-20 14:40:02.036003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 2.051 ms
00:29:23.313  [2024-11-20 14:40:02.036024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.052763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:23.313  [2024-11-20 14:40:02.052941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:29:23.313  [2024-11-20 14:40:02.052971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.623 ms
00:29:23.313  [2024-11-20 14:40:02.052984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.053435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:23.313  [2024-11-20 14:40:02.053461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:29:23.313  [2024-11-20 14:40:02.053476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.415 ms
00:29:23.313  [2024-11-20 14:40:02.053492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.096540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.096622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:29:23.313  [2024-11-20 14:40:02.096642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.096655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.096734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.096750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:29:23.313  [2024-11-20 14:40:02.096763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.096781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.096880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.096900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:29:23.313  [2024-11-20 14:40:02.096913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.096924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.096947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.096961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:29:23.313  [2024-11-20 14:40:02.096972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.096984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.200860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.200930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:29:23.313  [2024-11-20 14:40:02.200951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.200963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.285905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.285977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:29:23.313  [2024-11-20 14:40:02.285997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.286009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.286127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.286145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:29:23.313  [2024-11-20 14:40:02.286158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.286169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.286217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.286233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:29:23.313  [2024-11-20 14:40:02.286245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.286256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.286383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.286402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:29:23.313  [2024-11-20 14:40:02.286415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.286427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.286475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.286494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:29:23.313  [2024-11-20 14:40:02.286506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.286518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.286562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.286616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:29:23.313  [2024-11-20 14:40:02.286629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.286640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.286694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:23.313  [2024-11-20 14:40:02.286711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:29:23.313  [2024-11-20 14:40:02.286723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:29:23.313  [2024-11-20 14:40:02.286734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:23.313  [2024-11-20 14:40:02.286874] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 571.023 ms, result 0
00:29:24.684  
00:29:24.684  
00:29:24.684   14:40:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:29:27.208   14:40:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:27.208  [2024-11-20 14:40:05.945872] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:29:27.208  [2024-11-20 14:40:05.946022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82638 ]
00:29:27.208  [2024-11-20 14:40:06.120214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:27.466  [2024-11-20 14:40:06.223694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:27.723  [2024-11-20 14:40:06.551841] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:27.723  [2024-11-20 14:40:06.551929] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:29:27.983  [2024-11-20 14:40:06.712752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.983  [2024-11-20 14:40:06.712804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:29:27.983  [2024-11-20 14:40:06.712828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:29:27.983  [2024-11-20 14:40:06.712841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.983  [2024-11-20 14:40:06.712908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.983  [2024-11-20 14:40:06.712926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:29:27.983  [2024-11-20 14:40:06.712942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.039 ms
00:29:27.983  [2024-11-20 14:40:06.712953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.983  [2024-11-20 14:40:06.712984] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:29:27.983  [2024-11-20 14:40:06.714039] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:29:27.983  [2024-11-20 14:40:06.714144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.983  [2024-11-20 14:40:06.714162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:29:27.983  [2024-11-20 14:40:06.714176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.166 ms
00:29:27.983  [2024-11-20 14:40:06.714187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.983  [2024-11-20 14:40:06.715352] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:29:27.983  [2024-11-20 14:40:06.731680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.983  [2024-11-20 14:40:06.731733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:29:27.983  [2024-11-20 14:40:06.731752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.329 ms
00:29:27.983  [2024-11-20 14:40:06.731764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.983  [2024-11-20 14:40:06.731848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.983  [2024-11-20 14:40:06.731870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:29:27.983  [2024-11-20 14:40:06.731893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.026 ms
00:29:27.983  [2024-11-20 14:40:06.731905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.983  [2024-11-20 14:40:06.736393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.983  [2024-11-20 14:40:06.736446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:29:27.983  [2024-11-20 14:40:06.736463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.375 ms
00:29:27.983  [2024-11-20 14:40:06.736482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.983  [2024-11-20 14:40:06.736602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.983  [2024-11-20 14:40:06.736622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:29:27.983  [2024-11-20 14:40:06.736648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.083 ms
00:29:27.983  [2024-11-20 14:40:06.736660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.983  [2024-11-20 14:40:06.736733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.983  [2024-11-20 14:40:06.736751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:29:27.983  [2024-11-20 14:40:06.736764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.015 ms
00:29:27.983  [2024-11-20 14:40:06.736775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.983  [2024-11-20 14:40:06.736816] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:29:27.983  [2024-11-20 14:40:06.741303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.983  [2024-11-20 14:40:06.741357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:29:27.983  [2024-11-20 14:40:06.741375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.502 ms
00:29:27.983  [2024-11-20 14:40:06.741393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.983  [2024-11-20 14:40:06.741436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.983  [2024-11-20 14:40:06.741451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:29:27.983  [2024-11-20 14:40:06.741464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.013 ms
00:29:27.983  [2024-11-20 14:40:06.741475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.983  [2024-11-20 14:40:06.741526] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:29:27.983  [2024-11-20 14:40:06.741558] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:29:27.983  [2024-11-20 14:40:06.741631] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:29:27.983  [2024-11-20 14:40:06.741658] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:29:27.983  [2024-11-20 14:40:06.741771] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:29:27.983  [2024-11-20 14:40:06.741787] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:29:27.983  [2024-11-20 14:40:06.741801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:29:27.983  [2024-11-20 14:40:06.741817] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:29:27.983  [2024-11-20 14:40:06.741830] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:29:27.983  [2024-11-20 14:40:06.741842] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:29:27.983  [2024-11-20 14:40:06.741852] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:29:27.983  [2024-11-20 14:40:06.741863] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:29:27.983  [2024-11-20 14:40:06.741885] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:29:27.984  [2024-11-20 14:40:06.741898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.984  [2024-11-20 14:40:06.741909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:29:27.984  [2024-11-20 14:40:06.741921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.376 ms
00:29:27.984  [2024-11-20 14:40:06.741933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.984  [2024-11-20 14:40:06.742034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.984  [2024-11-20 14:40:06.742050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:29:27.984  [2024-11-20 14:40:06.742062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.071 ms
00:29:27.984  [2024-11-20 14:40:06.742073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.984  [2024-11-20 14:40:06.742231] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:29:27.984  [2024-11-20 14:40:06.742265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:29:27.984  [2024-11-20 14:40:06.742279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:29:27.984  [2024-11-20 14:40:06.742290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:29:27.984  [2024-11-20 14:40:06.742312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:29:27.984  [2024-11-20 14:40:06.742333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:29:27.984  [2024-11-20 14:40:06.742343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:29:27.984  [2024-11-20 14:40:06.742366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:29:27.984  [2024-11-20 14:40:06.742376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:29:27.984  [2024-11-20 14:40:06.742386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:29:27.984  [2024-11-20 14:40:06.742397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:29:27.984  [2024-11-20 14:40:06.742407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:29:27.984  [2024-11-20 14:40:06.742431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:29:27.984  [2024-11-20 14:40:06.742454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:29:27.984  [2024-11-20 14:40:06.742464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:29:27.984  [2024-11-20 14:40:06.742485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:27.984  [2024-11-20 14:40:06.742505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:29:27.984  [2024-11-20 14:40:06.742515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:27.984  [2024-11-20 14:40:06.742536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:29:27.984  [2024-11-20 14:40:06.742546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:27.984  [2024-11-20 14:40:06.742566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:29:27.984  [2024-11-20 14:40:06.742603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:29:27.984  [2024-11-20 14:40:06.742625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:29:27.984  [2024-11-20 14:40:06.742635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:29:27.984  [2024-11-20 14:40:06.742655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:29:27.984  [2024-11-20 14:40:06.742666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:29:27.984  [2024-11-20 14:40:06.742676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:29:27.984  [2024-11-20 14:40:06.742688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:29:27.984  [2024-11-20 14:40:06.742698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:29:27.984  [2024-11-20 14:40:06.742708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:29:27.984  [2024-11-20 14:40:06.742729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:29:27.984  [2024-11-20 14:40:06.742739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742749] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:29:27.984  [2024-11-20 14:40:06.742760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:29:27.984  [2024-11-20 14:40:06.742771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:29:27.984  [2024-11-20 14:40:06.742783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:29:27.984  [2024-11-20 14:40:06.742794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:29:27.984  [2024-11-20 14:40:06.742806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:29:27.984  [2024-11-20 14:40:06.742817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:29:27.984  [2024-11-20 14:40:06.742827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:29:27.984  [2024-11-20 14:40:06.742837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:29:27.984  [2024-11-20 14:40:06.742848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:29:27.984  [2024-11-20 14:40:06.742860] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:29:27.984  [2024-11-20 14:40:06.742874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:27.984  [2024-11-20 14:40:06.742887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:29:27.984  [2024-11-20 14:40:06.742898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:29:27.984  [2024-11-20 14:40:06.742910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:29:27.984  [2024-11-20 14:40:06.742921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:29:27.984  [2024-11-20 14:40:06.742932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:29:27.984  [2024-11-20 14:40:06.742943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:29:27.984  [2024-11-20 14:40:06.742955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:29:27.984  [2024-11-20 14:40:06.742966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:29:27.984  [2024-11-20 14:40:06.742977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:29:27.984  [2024-11-20 14:40:06.742988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:29:27.984  [2024-11-20 14:40:06.742999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:29:27.984  [2024-11-20 14:40:06.743010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:29:27.984  [2024-11-20 14:40:06.743022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:29:27.984  [2024-11-20 14:40:06.743034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:29:27.984  [2024-11-20 14:40:06.743045] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:29:27.984  [2024-11-20 14:40:06.743063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:29:27.984  [2024-11-20 14:40:06.743075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:29:27.984  [2024-11-20 14:40:06.743087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:29:27.984  [2024-11-20 14:40:06.743099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:29:27.984  [2024-11-20 14:40:06.743110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:29:27.984  [2024-11-20 14:40:06.743123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.984  [2024-11-20 14:40:06.743134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:29:27.984  [2024-11-20 14:40:06.743146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.968 ms
00:29:27.984  [2024-11-20 14:40:06.743157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.984  [2024-11-20 14:40:06.776818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.984  [2024-11-20 14:40:06.777032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:29:27.984  [2024-11-20 14:40:06.777172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.593 ms
00:29:27.984  [2024-11-20 14:40:06.777246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.984  [2024-11-20 14:40:06.777436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.984  [2024-11-20 14:40:06.777489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:29:27.984  [2024-11-20 14:40:06.777606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.066 ms
00:29:27.984  [2024-11-20 14:40:06.777719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.984  [2024-11-20 14:40:06.834535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.984  [2024-11-20 14:40:06.834767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:29:27.984  [2024-11-20 14:40:06.834900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 56.677 ms
00:29:27.984  [2024-11-20 14:40:06.834954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.984  [2024-11-20 14:40:06.835109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.985  [2024-11-20 14:40:06.835202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:29:27.985  [2024-11-20 14:40:06.835343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:29:27.985  [2024-11-20 14:40:06.835411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.985  [2024-11-20 14:40:06.835920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.985  [2024-11-20 14:40:06.836064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:29:27.985  [2024-11-20 14:40:06.836180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.302 ms
00:29:27.985  [2024-11-20 14:40:06.836230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.985  [2024-11-20 14:40:06.836478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.985  [2024-11-20 14:40:06.836541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:29:27.985  [2024-11-20 14:40:06.836669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.150 ms
00:29:27.985  [2024-11-20 14:40:06.836814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.985  [2024-11-20 14:40:06.853969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.985  [2024-11-20 14:40:06.854159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:29:27.985  [2024-11-20 14:40:06.854295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.015 ms
00:29:27.985  [2024-11-20 14:40:06.854348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.985  [2024-11-20 14:40:06.870963] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:29:27.985  [2024-11-20 14:40:06.871155] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:29:27.985  [2024-11-20 14:40:06.871187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.985  [2024-11-20 14:40:06.871201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:29:27.985  [2024-11-20 14:40:06.871214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.579 ms
00:29:27.985  [2024-11-20 14:40:06.871226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.985  [2024-11-20 14:40:06.901299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.985  [2024-11-20 14:40:06.901511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:29:27.985  [2024-11-20 14:40:06.901543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 29.997 ms
00:29:27.985  [2024-11-20 14:40:06.901557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.985  [2024-11-20 14:40:06.917512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.985  [2024-11-20 14:40:06.917589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:29:27.985  [2024-11-20 14:40:06.917610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.864 ms
00:29:27.985  [2024-11-20 14:40:06.917623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.985  [2024-11-20 14:40:06.933210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.985  [2024-11-20 14:40:06.933259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:29:27.985  [2024-11-20 14:40:06.933277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.534 ms
00:29:27.985  [2024-11-20 14:40:06.933289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:27.985  [2024-11-20 14:40:06.934127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:27.985  [2024-11-20 14:40:06.934165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:29:27.985  [2024-11-20 14:40:06.934183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.696 ms
00:29:27.985  [2024-11-20 14:40:06.934199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:28.243  [2024-11-20 14:40:07.008311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.243  [2024-11-20 14:40:07.008546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:29:28.243  [2024-11-20 14:40:07.008601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 74.084 ms
00:29:28.243  [2024-11-20 14:40:07.008615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:28.243  [2024-11-20 14:40:07.021437] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:29:28.243  [2024-11-20 14:40:07.024100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.243  [2024-11-20 14:40:07.024139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:29:28.243  [2024-11-20 14:40:07.024158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.418 ms
00:29:28.243  [2024-11-20 14:40:07.024171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:28.243  [2024-11-20 14:40:07.024293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.243  [2024-11-20 14:40:07.024313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:29:28.243  [2024-11-20 14:40:07.024328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:29:28.243  [2024-11-20 14:40:07.024343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:28.243  [2024-11-20 14:40:07.025994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.243  [2024-11-20 14:40:07.026035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:29:28.243  [2024-11-20 14:40:07.026051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.592 ms
00:29:28.243  [2024-11-20 14:40:07.026063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:28.244  [2024-11-20 14:40:07.026101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.244  [2024-11-20 14:40:07.026116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:29:28.244  [2024-11-20 14:40:07.026129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.006 ms
00:29:28.244  [2024-11-20 14:40:07.026140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:28.244  [2024-11-20 14:40:07.026188] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:29:28.244  [2024-11-20 14:40:07.026206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.244  [2024-11-20 14:40:07.026217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:29:28.244  [2024-11-20 14:40:07.026229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:29:28.244  [2024-11-20 14:40:07.026239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:28.244  [2024-11-20 14:40:07.057613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.244  [2024-11-20 14:40:07.057786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:29:28.244  [2024-11-20 14:40:07.057816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.347 ms
00:29:28.244  [2024-11-20 14:40:07.057839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:28.244  [2024-11-20 14:40:07.057928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:29:28.244  [2024-11-20 14:40:07.057947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:29:28.244  [2024-11-20 14:40:07.057960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.042 ms
00:29:28.244  [2024-11-20 14:40:07.057971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:29:28.244  [2024-11-20 14:40:07.061587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 347.291 ms, result 0
00:29:29.615  
[2024-11-20T14:40:09.530Z] Copying: 820/1048576 [kB] (820 kBps)
[2024-11-20T14:40:10.461Z] Copying: 4224/1048576 [kB] (3404 kBps)
[2024-11-20T14:40:11.393Z] Copying: 25/1024 [MB] (21 MBps)
[2024-11-20T14:40:12.327Z] Copying: 55/1024 [MB] (29 MBps)
[2024-11-20T14:40:13.700Z] Copying: 85/1024 [MB] (30 MBps)
[2024-11-20T14:40:14.635Z] Copying: 117/1024 [MB] (31 MBps)
[2024-11-20T14:40:15.662Z] Copying: 148/1024 [MB] (31 MBps)
[2024-11-20T14:40:16.596Z] Copying: 177/1024 [MB] (29 MBps)
[2024-11-20T14:40:17.529Z] Copying: 204/1024 [MB] (27 MBps)
[2024-11-20T14:40:18.462Z] Copying: 233/1024 [MB] (28 MBps)
[2024-11-20T14:40:19.395Z] Copying: 264/1024 [MB] (30 MBps)
[2024-11-20T14:40:20.328Z] Copying: 295/1024 [MB] (30 MBps)
[2024-11-20T14:40:21.700Z] Copying: 325/1024 [MB] (30 MBps)
[2024-11-20T14:40:22.633Z] Copying: 356/1024 [MB] (30 MBps)
[2024-11-20T14:40:23.565Z] Copying: 387/1024 [MB] (30 MBps)
[2024-11-20T14:40:24.499Z] Copying: 418/1024 [MB] (30 MBps)
[2024-11-20T14:40:25.433Z] Copying: 447/1024 [MB] (29 MBps)
[2024-11-20T14:40:26.367Z] Copying: 478/1024 [MB] (31 MBps)
[2024-11-20T14:40:27.300Z] Copying: 509/1024 [MB] (31 MBps)
[2024-11-20T14:40:28.674Z] Copying: 540/1024 [MB] (31 MBps)
[2024-11-20T14:40:29.608Z] Copying: 571/1024 [MB] (30 MBps)
[2024-11-20T14:40:30.544Z] Copying: 600/1024 [MB] (28 MBps)
[2024-11-20T14:40:31.477Z] Copying: 630/1024 [MB] (30 MBps)
[2024-11-20T14:40:32.410Z] Copying: 662/1024 [MB] (31 MBps)
[2024-11-20T14:40:33.342Z] Copying: 693/1024 [MB] (31 MBps)
[2024-11-20T14:40:34.351Z] Copying: 723/1024 [MB] (29 MBps)
[2024-11-20T14:40:35.308Z] Copying: 754/1024 [MB] (30 MBps)
[2024-11-20T14:40:36.694Z] Copying: 783/1024 [MB] (29 MBps)
[2024-11-20T14:40:37.628Z] Copying: 809/1024 [MB] (26 MBps)
[2024-11-20T14:40:38.558Z] Copying: 838/1024 [MB] (28 MBps)
[2024-11-20T14:40:39.489Z] Copying: 867/1024 [MB] (29 MBps)
[2024-11-20T14:40:40.424Z] Copying: 895/1024 [MB] (28 MBps)
[2024-11-20T14:40:41.358Z] Copying: 924/1024 [MB] (28 MBps)
[2024-11-20T14:40:42.291Z] Copying: 955/1024 [MB] (31 MBps)
[2024-11-20T14:40:43.661Z] Copying: 985/1024 [MB] (29 MBps)
[2024-11-20T14:40:43.661Z] Copying: 1014/1024 [MB] (29 MBps)
[2024-11-20T14:40:44.230Z] Copying: 1024/1024 [MB] (average 28 MBps)
00:30:05.248  [2024-11-20 14:40:43.957728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.248  [2024-11-20 14:40:43.957822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:30:05.248  [2024-11-20 14:40:43.957859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.004 ms
00:30:05.248  [2024-11-20 14:40:43.957880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.248  [2024-11-20 14:40:43.957928] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:05.248  [2024-11-20 14:40:43.961882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.248  [2024-11-20 14:40:43.964708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:30:05.248  [2024-11-20 14:40:43.964741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.920 ms
00:30:05.248  [2024-11-20 14:40:43.964760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.248  [2024-11-20 14:40:43.965268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.248  [2024-11-20 14:40:43.965316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:30:05.248  [2024-11-20 14:40:43.965350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.445 ms
00:30:05.248  [2024-11-20 14:40:43.965367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.248  [2024-11-20 14:40:43.976790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.248  [2024-11-20 14:40:43.976866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:30:05.248  [2024-11-20 14:40:43.976887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 11.386 ms
00:30:05.248  [2024-11-20 14:40:43.976900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.248  [2024-11-20 14:40:43.983671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.248  [2024-11-20 14:40:43.983877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:30:05.248  [2024-11-20 14:40:43.983918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.726 ms
00:30:05.248  [2024-11-20 14:40:43.983931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.248  [2024-11-20 14:40:44.015937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.248  [2024-11-20 14:40:44.016002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:30:05.248  [2024-11-20 14:40:44.016023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.916 ms
00:30:05.248  [2024-11-20 14:40:44.016036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.248  [2024-11-20 14:40:44.033877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.249  [2024-11-20 14:40:44.033939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:30:05.249  [2024-11-20 14:40:44.033959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.788 ms
00:30:05.249  [2024-11-20 14:40:44.033971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.249  [2024-11-20 14:40:44.035327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.249  [2024-11-20 14:40:44.035518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:30:05.249  [2024-11-20 14:40:44.035546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.307 ms
00:30:05.249  [2024-11-20 14:40:44.035560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.249  [2024-11-20 14:40:44.067803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.249  [2024-11-20 14:40:44.067866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:30:05.249  [2024-11-20 14:40:44.067886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.178 ms
00:30:05.249  [2024-11-20 14:40:44.067898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.249  [2024-11-20 14:40:44.099294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.249  [2024-11-20 14:40:44.099356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:30:05.249  [2024-11-20 14:40:44.099392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.342 ms
00:30:05.249  [2024-11-20 14:40:44.099405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.249  [2024-11-20 14:40:44.130497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.249  [2024-11-20 14:40:44.130716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:30:05.249  [2024-11-20 14:40:44.130746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.029 ms
00:30:05.249  [2024-11-20 14:40:44.130759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.249  [2024-11-20 14:40:44.161873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.249  [2024-11-20 14:40:44.162068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:30:05.249  [2024-11-20 14:40:44.162098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.015 ms
00:30:05.249  [2024-11-20 14:40:44.162113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.249  [2024-11-20 14:40:44.162160] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:05.249  [2024-11-20 14:40:44.162183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:30:05.249  [2024-11-20 14:40:44.162198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:     1536 / 261120 	wr_cnt: 1	state: open
00:30:05.249  [2024-11-20 14:40:44.162211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.162824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.249  [2024-11-20 14:40:44.163600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:30:05.250  [2024-11-20 14:40:44.163993] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:30:05.250  [2024-11-20 14:40:44.164005] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         016b7f42-3b0a-4c4c-8075-b33620d527e3
00:30:05.250  [2024-11-20 14:40:44.164017] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    262656
00:30:05.250  [2024-11-20 14:40:44.164028] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        134592
00:30:05.250  [2024-11-20 14:40:44.164038] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         132608
00:30:05.250  [2024-11-20 14:40:44.164058] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0150
00:30:05.250  [2024-11-20 14:40:44.164069] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:30:05.250  [2024-11-20 14:40:44.164080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:30:05.250  [2024-11-20 14:40:44.164091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:30:05.250  [2024-11-20 14:40:44.164115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:30:05.250  [2024-11-20 14:40:44.164125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:30:05.250  [2024-11-20 14:40:44.164138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.250  [2024-11-20 14:40:44.164149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:30:05.250  [2024-11-20 14:40:44.164161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.979 ms
00:30:05.250  [2024-11-20 14:40:44.164173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.250  [2024-11-20 14:40:44.181087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.250  [2024-11-20 14:40:44.181248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:30:05.250  [2024-11-20 14:40:44.181277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.862 ms
00:30:05.250  [2024-11-20 14:40:44.181297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.250  [2024-11-20 14:40:44.181775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:05.250  [2024-11-20 14:40:44.181801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:30:05.250  [2024-11-20 14:40:44.181817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.442 ms
00:30:05.250  [2024-11-20 14:40:44.181829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.250  [2024-11-20 14:40:44.225680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.250  [2024-11-20 14:40:44.225745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:30:05.250  [2024-11-20 14:40:44.225777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.250  [2024-11-20 14:40:44.225799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.250  [2024-11-20 14:40:44.225881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.250  [2024-11-20 14:40:44.225907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:30:05.250  [2024-11-20 14:40:44.225930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.250  [2024-11-20 14:40:44.225947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.250  [2024-11-20 14:40:44.226044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.250  [2024-11-20 14:40:44.226065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:30:05.250  [2024-11-20 14:40:44.226086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.250  [2024-11-20 14:40:44.226108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.250  [2024-11-20 14:40:44.226138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.250  [2024-11-20 14:40:44.226151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:30:05.250  [2024-11-20 14:40:44.226163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.250  [2024-11-20 14:40:44.226174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.509  [2024-11-20 14:40:44.332487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.509  [2024-11-20 14:40:44.332767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:30:05.509  [2024-11-20 14:40:44.332805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.509  [2024-11-20 14:40:44.332826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.509  [2024-11-20 14:40:44.418469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.509  [2024-11-20 14:40:44.418547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:30:05.509  [2024-11-20 14:40:44.418591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.509  [2024-11-20 14:40:44.418609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.509  [2024-11-20 14:40:44.418716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.509  [2024-11-20 14:40:44.418744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:30:05.509  [2024-11-20 14:40:44.418758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.509  [2024-11-20 14:40:44.418769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.509  [2024-11-20 14:40:44.418819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.509  [2024-11-20 14:40:44.418834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:30:05.509  [2024-11-20 14:40:44.418847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.509  [2024-11-20 14:40:44.418858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.509  [2024-11-20 14:40:44.418980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.509  [2024-11-20 14:40:44.419006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:30:05.509  [2024-11-20 14:40:44.419025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.509  [2024-11-20 14:40:44.419037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.509  [2024-11-20 14:40:44.419091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.509  [2024-11-20 14:40:44.419109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:30:05.509  [2024-11-20 14:40:44.419121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.509  [2024-11-20 14:40:44.419132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.509  [2024-11-20 14:40:44.419176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.509  [2024-11-20 14:40:44.419191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:30:05.509  [2024-11-20 14:40:44.419204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.509  [2024-11-20 14:40:44.419231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.509  [2024-11-20 14:40:44.419299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:05.509  [2024-11-20 14:40:44.419318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:30:05.509  [2024-11-20 14:40:44.419330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:05.509  [2024-11-20 14:40:44.419342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:05.509  [2024-11-20 14:40:44.419511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 461.755 ms, result 0
00:30:06.442  
00:30:06.442  
00:30:06.442   14:40:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:30:08.972  /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:30:08.972   14:40:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:30:08.972  [2024-11-20 14:40:47.741859] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:30:08.972  [2024-11-20 14:40:47.742216] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83043 ]
00:30:08.972  [2024-11-20 14:40:47.922284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:09.229  [2024-11-20 14:40:48.031148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:09.488  [2024-11-20 14:40:48.384488] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:09.488  [2024-11-20 14:40:48.384608] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:30:09.747  [2024-11-20 14:40:48.548242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.747  [2024-11-20 14:40:48.548527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Check configuration
00:30:09.747  [2024-11-20 14:40:48.548599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:30:09.747  [2024-11-20 14:40:48.548618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.747  [2024-11-20 14:40:48.548714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.747  [2024-11-20 14:40:48.548734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:30:09.747  [2024-11-20 14:40:48.548752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.055 ms
00:30:09.747  [2024-11-20 14:40:48.548763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.747  [2024-11-20 14:40:48.548797] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:30:09.747  [2024-11-20 14:40:48.549808] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:30:09.747  [2024-11-20 14:40:48.549854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.747  [2024-11-20 14:40:48.549869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:30:09.747  [2024-11-20 14:40:48.549882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.065 ms
00:30:09.747  [2024-11-20 14:40:48.549894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.747  [2024-11-20 14:40:48.551112] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:30:09.747  [2024-11-20 14:40:48.567795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.747  [2024-11-20 14:40:48.567888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Load super block
00:30:09.747  [2024-11-20 14:40:48.567911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.679 ms
00:30:09.747  [2024-11-20 14:40:48.567924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.747  [2024-11-20 14:40:48.568069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.747  [2024-11-20 14:40:48.568090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Validate super block
00:30:09.747  [2024-11-20 14:40:48.568104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.045 ms
00:30:09.747  [2024-11-20 14:40:48.568115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.747  [2024-11-20 14:40:48.573305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.747  [2024-11-20 14:40:48.573374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:30:09.747  [2024-11-20 14:40:48.573393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 5.044 ms
00:30:09.747  [2024-11-20 14:40:48.573413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.747  [2024-11-20 14:40:48.573533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.747  [2024-11-20 14:40:48.573554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:30:09.747  [2024-11-20 14:40:48.573595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.073 ms
00:30:09.747  [2024-11-20 14:40:48.573613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.747  [2024-11-20 14:40:48.573694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.747  [2024-11-20 14:40:48.573712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Register IO device
00:30:09.747  [2024-11-20 14:40:48.573725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.016 ms
00:30:09.747  [2024-11-20 14:40:48.573737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.747  [2024-11-20 14:40:48.573780] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:30:09.747  [2024-11-20 14:40:48.578150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.747  [2024-11-20 14:40:48.578200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:30:09.747  [2024-11-20 14:40:48.578218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 4.385 ms
00:30:09.747  [2024-11-20 14:40:48.578235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.747  [2024-11-20 14:40:48.578282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.747  [2024-11-20 14:40:48.578296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Decorate bands
00:30:09.747  [2024-11-20 14:40:48.578309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.015 ms
00:30:09.747  [2024-11-20 14:40:48.578320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.747  [2024-11-20 14:40:48.578412] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:30:09.748  [2024-11-20 14:40:48.578447] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:30:09.748  [2024-11-20 14:40:48.578493] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:30:09.748  [2024-11-20 14:40:48.578519] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:30:09.748  [2024-11-20 14:40:48.578649] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:30:09.748  [2024-11-20 14:40:48.578669] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:30:09.748  [2024-11-20 14:40:48.578696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:30:09.748  [2024-11-20 14:40:48.578712] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity:         103424.00 MiB
00:30:09.748  [2024-11-20 14:40:48.578727] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity:       5171.00 MiB
00:30:09.748  [2024-11-20 14:40:48.578739] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries:                    20971520
00:30:09.748  [2024-11-20 14:40:48.578750] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size:               4
00:30:09.748  [2024-11-20 14:40:48.578761] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages:           2048
00:30:09.748  [2024-11-20 14:40:48.578777] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count            5
00:30:09.748  [2024-11-20 14:40:48.578790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.748  [2024-11-20 14:40:48.578802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize layout
00:30:09.748  [2024-11-20 14:40:48.578813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.382 ms
00:30:09.748  [2024-11-20 14:40:48.578825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.748  [2024-11-20 14:40:48.578926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.748  [2024-11-20 14:40:48.578943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Verify layout
00:30:09.748  [2024-11-20 14:40:48.578955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.072 ms
00:30:09.748  [2024-11-20 14:40:48.578966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.748  [2024-11-20 14:40:48.579101] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:30:09.748  [2024-11-20 14:40:48.579129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:30:09.748  [2024-11-20 14:40:48.579143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:30:09.748  [2024-11-20 14:40:48.579155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:30:09.748  [2024-11-20 14:40:48.579177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      80.00 MiB
00:30:09.748  [2024-11-20 14:40:48.579198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:30:09.748  [2024-11-20 14:40:48.579208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:30:09.748  [2024-11-20 14:40:48.579229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:30:09.748  [2024-11-20 14:40:48.579239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      80.62 MiB
00:30:09.748  [2024-11-20 14:40:48.579249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.50 MiB
00:30:09.748  [2024-11-20 14:40:48.579259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:30:09.748  [2024-11-20 14:40:48.579271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.88 MiB
00:30:09.748  [2024-11-20 14:40:48.579294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:30:09.748  [2024-11-20 14:40:48.579315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      114.00 MiB
00:30:09.748  [2024-11-20 14:40:48.579325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:30:09.748  [2024-11-20 14:40:48.579345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      81.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:09.748  [2024-11-20 14:40:48.579366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:30:09.748  [2024-11-20 14:40:48.579375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      89.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:09.748  [2024-11-20 14:40:48.579396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:30:09.748  [2024-11-20 14:40:48.579406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      97.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:09.748  [2024-11-20 14:40:48.579441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:30:09.748  [2024-11-20 14:40:48.579453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      105.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      8.00 MiB
00:30:09.748  [2024-11-20 14:40:48.579475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:30:09.748  [2024-11-20 14:40:48.579486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:30:09.748  [2024-11-20 14:40:48.579505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:30:09.748  [2024-11-20 14:40:48.579516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.38 MiB
00:30:09.748  [2024-11-20 14:40:48.579526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.25 MiB
00:30:09.748  [2024-11-20 14:40:48.579537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:30:09.748  [2024-11-20 14:40:48.579548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.62 MiB
00:30:09.748  [2024-11-20 14:40:48.579558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:30:09.748  [2024-11-20 14:40:48.579594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      113.75 MiB
00:30:09.748  [2024-11-20 14:40:48.579615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579625] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:30:09.748  [2024-11-20 14:40:48.579636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:30:09.748  [2024-11-20 14:40:48.579647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.00 MiB
00:30:09.748  [2024-11-20 14:40:48.579658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      0.12 MiB
00:30:09.748  [2024-11-20 14:40:48.579670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:30:09.748  [2024-11-20 14:40:48.579687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      102400.25 MiB
00:30:09.748  [2024-11-20 14:40:48.579697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      3.38 MiB
00:30:09.748  [2024-11-20 14:40:48.579708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:30:09.748  [2024-11-20 14:40:48.579718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 	offset:                      0.25 MiB
00:30:09.748  [2024-11-20 14:40:48.579728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 	blocks:                      102400.00 MiB
00:30:09.748  [2024-11-20 14:40:48.579740] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:30:09.748  [2024-11-20 14:40:48.579755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:09.748  [2024-11-20 14:40:48.579767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:30:09.748  [2024-11-20 14:40:48.579779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:30:09.748  [2024-11-20 14:40:48.579789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:30:09.748  [2024-11-20 14:40:48.579800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:30:09.748  [2024-11-20 14:40:48.579811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:30:09.748  [2024-11-20 14:40:48.579821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:30:09.748  [2024-11-20 14:40:48.579833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:30:09.748  [2024-11-20 14:40:48.579844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:30:09.749  [2024-11-20 14:40:48.579856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:30:09.749  [2024-11-20 14:40:48.579867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:30:09.749  [2024-11-20 14:40:48.579878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:30:09.749  [2024-11-20 14:40:48.579889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:30:09.749  [2024-11-20 14:40:48.579900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:30:09.749  [2024-11-20 14:40:48.579912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:30:09.749  [2024-11-20 14:40:48.579923] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:30:09.749  [2024-11-20 14:40:48.579942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:09.749  [2024-11-20 14:40:48.579955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:30:09.749  [2024-11-20 14:40:48.579967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:30:09.749  [2024-11-20 14:40:48.579978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:30:09.749  [2024-11-20 14:40:48.579989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:30:09.749  [2024-11-20 14:40:48.580001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.749  [2024-11-20 14:40:48.580013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:30:09.749  [2024-11-20 14:40:48.580025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.971 ms
00:30:09.749  [2024-11-20 14:40:48.580035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.749  [2024-11-20 14:40:48.613748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.749  [2024-11-20 14:40:48.613952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:30:09.749  [2024-11-20 14:40:48.613987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.645 ms
00:30:09.749  [2024-11-20 14:40:48.613999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.749  [2024-11-20 14:40:48.614129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.749  [2024-11-20 14:40:48.614145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:30:09.749  [2024-11-20 14:40:48.614163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.067 ms
00:30:09.749  [2024-11-20 14:40:48.614174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.749  [2024-11-20 14:40:48.665438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.749  [2024-11-20 14:40:48.665501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:30:09.749  [2024-11-20 14:40:48.665522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 51.165 ms
00:30:09.749  [2024-11-20 14:40:48.665534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.749  [2024-11-20 14:40:48.665637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.749  [2024-11-20 14:40:48.665659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:30:09.749  [2024-11-20 14:40:48.665679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.005 ms
00:30:09.749  [2024-11-20 14:40:48.665690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.749  [2024-11-20 14:40:48.666105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.749  [2024-11-20 14:40:48.666125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:30:09.749  [2024-11-20 14:40:48.666138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.304 ms
00:30:09.749  [2024-11-20 14:40:48.666149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.749  [2024-11-20 14:40:48.666309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.749  [2024-11-20 14:40:48.666329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:30:09.749  [2024-11-20 14:40:48.666342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.128 ms
00:30:09.749  [2024-11-20 14:40:48.666360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.749  [2024-11-20 14:40:48.683408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.749  [2024-11-20 14:40:48.683490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:30:09.749  [2024-11-20 14:40:48.683529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.016 ms
00:30:09.749  [2024-11-20 14:40:48.683548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:09.749  [2024-11-20 14:40:48.700246] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:30:09.749  [2024-11-20 14:40:48.700438] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:30:09.749  [2024-11-20 14:40:48.700466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:09.749  [2024-11-20 14:40:48.700480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore NV cache metadata
00:30:09.749  [2024-11-20 14:40:48.700494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 16.681 ms
00:30:09.749  [2024-11-20 14:40:48.700505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.730654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.730850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore valid map metadata
00:30:10.007  [2024-11-20 14:40:48.730881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 30.089 ms
00:30:10.007  [2024-11-20 14:40:48.730895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.746976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.747032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore band info metadata
00:30:10.007  [2024-11-20 14:40:48.747051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.995 ms
00:30:10.007  [2024-11-20 14:40:48.747063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.762628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.762674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore trim metadata
00:30:10.007  [2024-11-20 14:40:48.762692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.506 ms
00:30:10.007  [2024-11-20 14:40:48.762703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.763526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.763565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize P2L checkpointing
00:30:10.007  [2024-11-20 14:40:48.763599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.695 ms
00:30:10.007  [2024-11-20 14:40:48.763618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.840179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.840251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore P2L checkpoints
00:30:10.007  [2024-11-20 14:40:48.840281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 76.532 ms
00:30:10.007  [2024-11-20 14:40:48.840294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.853184] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:30:10.007  [2024-11-20 14:40:48.855962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.856006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize L2P
00:30:10.007  [2024-11-20 14:40:48.856027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 15.590 ms
00:30:10.007  [2024-11-20 14:40:48.856039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.856168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.856190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Restore L2P
00:30:10.007  [2024-11-20 14:40:48.856203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.008 ms
00:30:10.007  [2024-11-20 14:40:48.856219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.856922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.856969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize band initialization
00:30:10.007  [2024-11-20 14:40:48.856986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.644 ms
00:30:10.007  [2024-11-20 14:40:48.856998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.857038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.857053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Start core poller
00:30:10.007  [2024-11-20 14:40:48.857065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.007 ms
00:30:10.007  [2024-11-20 14:40:48.857077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.857127] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:30:10.007  [2024-11-20 14:40:48.857143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.857155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Self test on startup
00:30:10.007  [2024-11-20 14:40:48.857167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.018 ms
00:30:10.007  [2024-11-20 14:40:48.857178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.888960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.889148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL dirty state
00:30:10.007  [2024-11-20 14:40:48.889180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.757 ms
00:30:10.007  [2024-11-20 14:40:48.889202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.007  [2024-11-20 14:40:48.889330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:10.007  [2024-11-20 14:40:48.889353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finalize initialization
00:30:10.008  [2024-11-20 14:40:48.889366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.047 ms
00:30:10.008  [2024-11-20 14:40:48.889378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:10.008  [2024-11-20 14:40:48.890647] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 341.871 ms, result 0
00:30:11.380  
[2024-11-20T14:40:51.296Z] Copying: 22/1024 [MB] (22 MBps)
[2024-11-20T14:40:52.228Z] Copying: 49/1024 [MB] (26 MBps)
[2024-11-20T14:40:53.161Z] Copying: 76/1024 [MB] (26 MBps)
[2024-11-20T14:40:54.197Z] Copying: 104/1024 [MB] (28 MBps)
[2024-11-20T14:40:55.128Z] Copying: 132/1024 [MB] (27 MBps)
[2024-11-20T14:40:56.497Z] Copying: 162/1024 [MB] (30 MBps)
[2024-11-20T14:40:57.430Z] Copying: 189/1024 [MB] (27 MBps)
[2024-11-20T14:40:58.363Z] Copying: 217/1024 [MB] (27 MBps)
[2024-11-20T14:40:59.299Z] Copying: 247/1024 [MB] (30 MBps)
[2024-11-20T14:41:00.236Z] Copying: 275/1024 [MB] (28 MBps)
[2024-11-20T14:41:01.171Z] Copying: 304/1024 [MB] (28 MBps)
[2024-11-20T14:41:02.184Z] Copying: 331/1024 [MB] (26 MBps)
[2024-11-20T14:41:03.116Z] Copying: 360/1024 [MB] (29 MBps)
[2024-11-20T14:41:04.489Z] Copying: 388/1024 [MB] (27 MBps)
[2024-11-20T14:41:05.423Z] Copying: 415/1024 [MB] (27 MBps)
[2024-11-20T14:41:06.357Z] Copying: 442/1024 [MB] (27 MBps)
[2024-11-20T14:41:07.291Z] Copying: 470/1024 [MB] (28 MBps)
[2024-11-20T14:41:08.225Z] Copying: 499/1024 [MB] (29 MBps)
[2024-11-20T14:41:09.159Z] Copying: 530/1024 [MB] (30 MBps)
[2024-11-20T14:41:10.530Z] Copying: 556/1024 [MB] (26 MBps)
[2024-11-20T14:41:11.463Z] Copying: 585/1024 [MB] (28 MBps)
[2024-11-20T14:41:12.398Z] Copying: 612/1024 [MB] (26 MBps)
[2024-11-20T14:41:13.331Z] Copying: 637/1024 [MB] (25 MBps)
[2024-11-20T14:41:14.267Z] Copying: 663/1024 [MB] (26 MBps)
[2024-11-20T14:41:15.208Z] Copying: 690/1024 [MB] (26 MBps)
[2024-11-20T14:41:16.158Z] Copying: 717/1024 [MB] (26 MBps)
[2024-11-20T14:41:17.532Z] Copying: 743/1024 [MB] (26 MBps)
[2024-11-20T14:41:18.191Z] Copying: 769/1024 [MB] (25 MBps)
[2024-11-20T14:41:19.125Z] Copying: 795/1024 [MB] (25 MBps)
[2024-11-20T14:41:20.497Z] Copying: 821/1024 [MB] (25 MBps)
[2024-11-20T14:41:21.432Z] Copying: 846/1024 [MB] (25 MBps)
[2024-11-20T14:41:22.366Z] Copying: 871/1024 [MB] (24 MBps)
[2024-11-20T14:41:23.302Z] Copying: 896/1024 [MB] (25 MBps)
[2024-11-20T14:41:24.237Z] Copying: 923/1024 [MB] (26 MBps)
[2024-11-20T14:41:25.170Z] Copying: 950/1024 [MB] (26 MBps)
[2024-11-20T14:41:26.545Z] Copying: 976/1024 [MB] (26 MBps)
[2024-11-20T14:41:27.111Z] Copying: 1003/1024 [MB] (27 MBps)
[2024-11-20T14:41:27.111Z] Copying: 1024/1024 [MB] (average 27 MBps)
00:30:48.129  [2024-11-20 14:41:26.921619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:26.921705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinit core IO channel
00:30:48.129  [2024-11-20 14:41:26.921726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.019 ms
00:30:48.129  [2024-11-20 14:41:26.921739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.129  [2024-11-20 14:41:26.921773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:48.129  [2024-11-20 14:41:26.925659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:26.925723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Unregister IO device
00:30:48.129  [2024-11-20 14:41:26.925755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.861 ms
00:30:48.129  [2024-11-20 14:41:26.925768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.129  [2024-11-20 14:41:26.926085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:26.926118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Stop core poller
00:30:48.129  [2024-11-20 14:41:26.926137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.279 ms
00:30:48.129  [2024-11-20 14:41:26.926150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.129  [2024-11-20 14:41:26.929908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:26.929953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist L2P
00:30:48.129  [2024-11-20 14:41:26.929970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 3.725 ms
00:30:48.129  [2024-11-20 14:41:26.929982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.129  [2024-11-20 14:41:26.936819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:26.936852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Finish L2P trims
00:30:48.129  [2024-11-20 14:41:26.936883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 6.801 ms
00:30:48.129  [2024-11-20 14:41:26.936894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.129  [2024-11-20 14:41:26.970064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:26.970117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist NV cache metadata
00:30:48.129  [2024-11-20 14:41:26.970135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 33.092 ms
00:30:48.129  [2024-11-20 14:41:26.970147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.129  [2024-11-20 14:41:26.988734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:26.988803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist valid map metadata
00:30:48.129  [2024-11-20 14:41:26.988838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 18.555 ms
00:30:48.129  [2024-11-20 14:41:26.988850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.129  [2024-11-20 14:41:26.990846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:26.990902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist P2L metadata
00:30:48.129  [2024-11-20 14:41:26.990920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.961 ms
00:30:48.129  [2024-11-20 14:41:26.990931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.129  [2024-11-20 14:41:27.022239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:27.022288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist band info metadata
00:30:48.129  [2024-11-20 14:41:27.022337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 31.284 ms
00:30:48.129  [2024-11-20 14:41:27.022349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.129  [2024-11-20 14:41:27.054841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:27.054920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist trim metadata
00:30:48.129  [2024-11-20 14:41:27.054938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.462 ms
00:30:48.129  [2024-11-20 14:41:27.054951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.129  [2024-11-20 14:41:27.087893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.129  [2024-11-20 14:41:27.088167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Persist superblock
00:30:48.129  [2024-11-20 14:41:27.088197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.910 ms
00:30:48.129  [2024-11-20 14:41:27.088210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.388  [2024-11-20 14:41:27.120480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.388  [2024-11-20 14:41:27.120724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Set FTL clean state
00:30:48.388  [2024-11-20 14:41:27.120754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 32.188 ms
00:30:48.388  [2024-11-20 14:41:27.120767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.388  [2024-11-20 14:41:27.120800] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:30:48.388  [2024-11-20 14:41:27.120823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:30:48.388  [2024-11-20 14:41:27.120847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   2:     1536 / 261120 	wr_cnt: 1	state: open
00:30:48.388  [2024-11-20 14:41:27.120860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   3:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.120991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.388  [2024-11-20 14:41:27.121004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  19:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  20:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  21:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  22:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  23:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  24:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  25:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  26:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  27:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  28:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  29:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  30:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  31:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  32:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  33:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  34:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  35:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  36:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  37:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  38:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  39:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  40:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  41:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  42:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  43:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  44:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  45:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  46:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  47:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  48:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  49:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  50:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  51:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  52:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  53:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  54:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  55:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  56:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  57:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  58:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  59:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  60:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  61:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  62:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  63:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  64:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  65:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  66:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  67:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  68:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  69:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  70:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  71:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  72:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  73:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  74:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  75:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  76:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  77:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  78:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  79:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  80:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  81:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  82:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  83:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  84:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  85:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  86:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  87:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  88:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  89:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  90:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  91:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.121972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  92:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.122000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  93:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.122011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  94:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.122038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  95:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.122065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  96:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.122092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  97:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.122102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  98:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.122112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band  99:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.122123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]  Band 100:        0 / 261120 	wr_cnt: 0	state: free
00:30:48.389  [2024-11-20 14:41:27.122142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:30:48.389  [2024-11-20 14:41:27.122157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         016b7f42-3b0a-4c4c-8075-b33620d527e3
00:30:48.389  [2024-11-20 14:41:27.122167] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    262656
00:30:48.389  [2024-11-20 14:41:27.122177] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        960
00:30:48.390  [2024-11-20 14:41:27.122186] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         0
00:30:48.390  [2024-11-20 14:41:27.122196] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 inf
00:30:48.390  [2024-11-20 14:41:27.122206] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:30:48.390  [2024-11-20 14:41:27.122216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
00:30:48.390  [2024-11-20 14:41:27.122237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
00:30:48.390  [2024-11-20 14:41:27.122247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]    low: 0
00:30:48.390  [2024-11-20 14:41:27.122255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:30:48.390  [2024-11-20 14:41:27.122265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.390  [2024-11-20 14:41:27.122275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:30:48.390  [2024-11-20 14:41:27.122286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.467 ms
00:30:48.390  [2024-11-20 14:41:27.122297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.390  [2024-11-20 14:41:27.139499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.390  [2024-11-20 14:41:27.139551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:30:48.390  [2024-11-20 14:41:27.139589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 17.153 ms
00:30:48.390  [2024-11-20 14:41:27.139605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.390  [2024-11-20 14:41:27.140055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:48.390  [2024-11-20 14:41:27.140084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize P2L checkpointing
00:30:48.390  [2024-11-20 14:41:27.140108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.421 ms
00:30:48.390  [2024-11-20 14:41:27.140119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.390  [2024-11-20 14:41:27.184380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.390  [2024-11-20 14:41:27.184434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:30:48.390  [2024-11-20 14:41:27.184466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.390  [2024-11-20 14:41:27.184477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.390  [2024-11-20 14:41:27.184544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.390  [2024-11-20 14:41:27.184558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:30:48.390  [2024-11-20 14:41:27.184575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.390  [2024-11-20 14:41:27.184634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.390  [2024-11-20 14:41:27.184748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.390  [2024-11-20 14:41:27.184779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:30:48.390  [2024-11-20 14:41:27.184792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.390  [2024-11-20 14:41:27.184803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.390  [2024-11-20 14:41:27.184826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.390  [2024-11-20 14:41:27.184839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:30:48.390  [2024-11-20 14:41:27.184851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.390  [2024-11-20 14:41:27.184868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.390  [2024-11-20 14:41:27.291532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.390  [2024-11-20 14:41:27.291657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:30:48.390  [2024-11-20 14:41:27.291680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.390  [2024-11-20 14:41:27.291698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.648  [2024-11-20 14:41:27.376876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.648  [2024-11-20 14:41:27.376948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:30:48.648  [2024-11-20 14:41:27.376977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.648  [2024-11-20 14:41:27.376989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.648  [2024-11-20 14:41:27.377091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.648  [2024-11-20 14:41:27.377109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize core IO channel
00:30:48.648  [2024-11-20 14:41:27.377122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.648  [2024-11-20 14:41:27.377133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.648  [2024-11-20 14:41:27.377181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.648  [2024-11-20 14:41:27.377195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands
00:30:48.648  [2024-11-20 14:41:27.377206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.648  [2024-11-20 14:41:27.377218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.648  [2024-11-20 14:41:27.377352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.648  [2024-11-20 14:41:27.377373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize memory pools
00:30:48.648  [2024-11-20 14:41:27.377385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.648  [2024-11-20 14:41:27.377396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.648  [2024-11-20 14:41:27.377450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.648  [2024-11-20 14:41:27.377468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize superblock
00:30:48.648  [2024-11-20 14:41:27.377480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.648  [2024-11-20 14:41:27.377491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.648  [2024-11-20 14:41:27.377543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.648  [2024-11-20 14:41:27.377558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open cache bdev
00:30:48.648  [2024-11-20 14:41:27.377598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.648  [2024-11-20 14:41:27.377614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.648  [2024-11-20 14:41:27.377669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:30:48.648  [2024-11-20 14:41:27.377686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Open base bdev
00:30:48.648  [2024-11-20 14:41:27.377697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.000 ms
00:30:48.648  [2024-11-20 14:41:27.377709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:30:48.648  [2024-11-20 14:41:27.377855] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 456.236 ms, result 0
00:30:49.582  
00:30:49.582  
00:30:49.582   14:41:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:30:52.189  /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:30:52.189   14:41:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:30:52.189   14:41:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:30:52.189   14:41:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:30:52.189   14:41:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:30:52.189   14:41:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:30:52.189   14:41:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:30:52.189   14:41:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:30:52.189  Process with pid 81281 is not found
00:30:52.189   14:41:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81281
00:30:52.189   14:41:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81281 ']'
00:30:52.189   14:41:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81281
00:30:52.189  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81281) - No such process
00:30:52.189   14:41:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81281 is not found'
00:30:52.189   14:41:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:30:52.447  Remove shared memory files
00:30:52.447   14:41:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:30:52.447   14:41:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:30:52.447   14:41:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:30:52.447   14:41:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:30:52.447   14:41:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:30:52.447   14:41:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:30:52.447   14:41:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:30:52.447  ************************************
00:30:52.447  END TEST ftl_dirty_shutdown
00:30:52.447  ************************************
00:30:52.447  
00:30:52.447  real	3m38.992s
00:30:52.447  user	4m11.445s
00:30:52.447  sys	0m38.527s
00:30:52.447   14:41:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:52.447   14:41:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:52.447   14:41:31 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:30:52.447   14:41:31 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:30:52.447   14:41:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:52.447   14:41:31 ftl -- common/autotest_common.sh@10 -- # set +x
00:30:52.447  ************************************
00:30:52.447  START TEST ftl_upgrade_shutdown
00:30:52.447  ************************************
00:30:52.447   14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:30:52.707  * Looking for test storage...
00:30:52.707  * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:30:52.707     14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:30:52.707     14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:30:52.707    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:52.708     14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1
00:30:52.708     14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1
00:30:52.708     14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:52.708     14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:30:52.708     14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2
00:30:52.708     14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2
00:30:52.708     14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:52.708     14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:30:52.708  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:52.708  		--rc genhtml_branch_coverage=1
00:30:52.708  		--rc genhtml_function_coverage=1
00:30:52.708  		--rc genhtml_legend=1
00:30:52.708  		--rc geninfo_all_blocks=1
00:30:52.708  		--rc geninfo_unexecuted_blocks=1
00:30:52.708  		
00:30:52.708  		'
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:30:52.708  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:52.708  		--rc genhtml_branch_coverage=1
00:30:52.708  		--rc genhtml_function_coverage=1
00:30:52.708  		--rc genhtml_legend=1
00:30:52.708  		--rc geninfo_all_blocks=1
00:30:52.708  		--rc geninfo_unexecuted_blocks=1
00:30:52.708  		
00:30:52.708  		'
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:30:52.708  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:52.708  		--rc genhtml_branch_coverage=1
00:30:52.708  		--rc genhtml_function_coverage=1
00:30:52.708  		--rc genhtml_legend=1
00:30:52.708  		--rc geninfo_all_blocks=1
00:30:52.708  		--rc geninfo_unexecuted_blocks=1
00:30:52.708  		
00:30:52.708  		'
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:30:52.708  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:52.708  		--rc genhtml_branch_coverage=1
00:30:52.708  		--rc genhtml_function_coverage=1
00:30:52.708  		--rc genhtml_legend=1
00:30:52.708  		--rc geninfo_all_blocks=1
00:30:52.708  		--rc geninfo_unexecuted_blocks=1
00:30:52.708  		
00:30:52.708  		'
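The block above is scripts/common.sh deciding, via cmp_versions, that the installed lcov (1.15) predates 2.0 and therefore needs the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings. A simplified, self-contained bash rendition of the comparison the trace walks through (the real helper additionally normalizes each component through its decimal function and handles <=, >= and ==; those details are elided here):

  # Split both versions on . - :, then compare component-wise,
  # treating missing components as 0. Exit status encodes the result.
  cmp_versions() {
      local ver1 ver2 v op=$2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          local a=$((${ver1[v]:-0})) b=$((${ver2[v]:-0}))
          ((a > b)) && { [[ $op == '>' ]]; return; }   # first difference decides
          ((a < b)) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]                                # all components equal
  }
  cmp_versions 1.15 '<' 2 && echo "old lcov"   # succeeds, as in the trace above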
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:30:52.708      14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
00:30:52.708     14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:30:52.708     14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:52.708    14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2
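Before tcp_target_setup runs, upgrade_shutdown.sh has pinned down the whole device geometry in environment variables; restated with the traced values (sizes are in MiB, matching the capacity lines printed later during layout setup):

  export FTL_BDEV=ftl               # name of the FTL bdev under test
  export FTL_BASE=0000:00:11.0      # PCI address of the base (data) NVMe device
  export FTL_BASE_SIZE=20480        # requested base size: 20 GiB
  export FTL_CACHE=0000:00:10.0     # PCI address of the NV cache NVMe device
  export FTL_CACHE_SIZE=5120        # cache size: 5 GiB
  export FTL_L2P_DRAM_LIMIT=2       # DRAM cap for the resident L2P table, MiB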
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:30:52.708   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:30:52.709   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83540
00:30:52.709   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]'
00:30:52.709   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:30:52.709   14:41:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83540
00:30:52.709   14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83540 ']'
00:30:52.709   14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:52.709   14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:52.709  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:52.709   14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:52.709   14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:52.709   14:41:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:52.967  [2024-11-20 14:41:31.725982] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:30:52.967  [2024-11-20 14:41:31.726159] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83540 ]
00:30:52.967  [2024-11-20 14:41:31.914454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:53.225  [2024-11-20 14:41:32.049300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
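common.sh@87-91 forks spdk_tgt pinned to core 0 and then blocks in waitforlisten until pid 83540 answers on /var/tmp/spdk.sock, which is what the (( i == 0 )) / return 0 lines above report. A hedged sketch of that polling idiom, not SPDK's exact implementation (rpc_get_methods is just a cheap RPC that succeeds once the server is serving; max_retries=100 comes from the trace):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1          # target died during startup
          "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods \
              &> /dev/null && return 0                     # socket is up and serving RPCs
          sleep 0.5
      done
      return 1
  }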
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT')
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]]
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]]
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]]
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]]
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]]
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}"
00:30:54.159   14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]]
00:30:54.159    14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480
00:30:54.159    14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base
00:30:54.159    14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:30:54.159    14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480
00:30:54.159    14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev
00:30:54.159     14:41:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
00:30:54.416    14:41:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1
00:30:54.416    14:41:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size
00:30:54.416     14:41:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1
00:30:54.416     14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1
00:30:54.416     14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:30:54.416     14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:30:54.416     14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:30:54.416      14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1
00:30:54.674     14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:30:54.674    {
00:30:54.674      "name": "basen1",
00:30:54.674      "aliases": [
00:30:54.674        "3dda5cd1-c483-4f94-8a67-50fb60b996eb"
00:30:54.674      ],
00:30:54.674      "product_name": "NVMe disk",
00:30:54.674      "block_size": 4096,
00:30:54.674      "num_blocks": 1310720,
00:30:54.674      "uuid": "3dda5cd1-c483-4f94-8a67-50fb60b996eb",
00:30:54.674      "numa_id": -1,
00:30:54.674      "assigned_rate_limits": {
00:30:54.674        "rw_ios_per_sec": 0,
00:30:54.674        "rw_mbytes_per_sec": 0,
00:30:54.674        "r_mbytes_per_sec": 0,
00:30:54.674        "w_mbytes_per_sec": 0
00:30:54.674      },
00:30:54.674      "claimed": true,
00:30:54.674      "claim_type": "read_many_write_one",
00:30:54.674      "zoned": false,
00:30:54.674      "supported_io_types": {
00:30:54.674        "read": true,
00:30:54.674        "write": true,
00:30:54.674        "unmap": true,
00:30:54.674        "flush": true,
00:30:54.674        "reset": true,
00:30:54.674        "nvme_admin": true,
00:30:54.674        "nvme_io": true,
00:30:54.674        "nvme_io_md": false,
00:30:54.674        "write_zeroes": true,
00:30:54.674        "zcopy": false,
00:30:54.674        "get_zone_info": false,
00:30:54.674        "zone_management": false,
00:30:54.674        "zone_append": false,
00:30:54.674        "compare": true,
00:30:54.674        "compare_and_write": false,
00:30:54.674        "abort": true,
00:30:54.674        "seek_hole": false,
00:30:54.674        "seek_data": false,
00:30:54.674        "copy": true,
00:30:54.674        "nvme_iov_md": false
00:30:54.674      },
00:30:54.674      "driver_specific": {
00:30:54.674        "nvme": [
00:30:54.674          {
00:30:54.674            "pci_address": "0000:00:11.0",
00:30:54.674            "trid": {
00:30:54.674              "trtype": "PCIe",
00:30:54.674              "traddr": "0000:00:11.0"
00:30:54.674            },
00:30:54.674            "ctrlr_data": {
00:30:54.674              "cntlid": 0,
00:30:54.674              "vendor_id": "0x1b36",
00:30:54.674              "model_number": "QEMU NVMe Ctrl",
00:30:54.674              "serial_number": "12341",
00:30:54.674              "firmware_revision": "8.0.0",
00:30:54.674              "subnqn": "nqn.2019-08.org.qemu:12341",
00:30:54.674              "oacs": {
00:30:54.674                "security": 0,
00:30:54.674                "format": 1,
00:30:54.674                "firmware": 0,
00:30:54.674                "ns_manage": 1
00:30:54.674              },
00:30:54.674              "multi_ctrlr": false,
00:30:54.674              "ana_reporting": false
00:30:54.674            },
00:30:54.674            "vs": {
00:30:54.674              "nvme_version": "1.4"
00:30:54.674            },
00:30:54.674            "ns_data": {
00:30:54.674              "id": 1,
00:30:54.674              "can_share": false
00:30:54.674            }
00:30:54.674          }
00:30:54.674        ],
00:30:54.674        "mp_policy": "active_passive"
00:30:54.674      }
00:30:54.674    }
00:30:54.674  ]'
00:30:54.674      14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:30:54.674     14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:30:54.674      14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:30:54.933     14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720
00:30:54.933     14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:30:54.933     14:41:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120
00:30:54.933    14:41:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120
00:30:54.933    14:41:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]]
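get_bdev_size boils the JSON above down to MiB: 1,310,720 blocks x 4,096 B/block = 5,368,709,120 B = 5,120 MiB, so the raw basen1 namespace is only 5 GiB. The [[ 20480 -le 5120 ]] guard therefore fails, and create_base_bdev falls through to clear_lvols and builds the requested 20 GiB as a thin-provisioned logical volume on top of the namespace instead.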
00:30:54.933    14:41:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols
00:30:54.933     14:41:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:30:54.933     14:41:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:30:55.190    14:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=d27189a1-926c-49cd-b173-4d37645cd761
00:30:55.190    14:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores
00:30:55.190    14:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d27189a1-926c-49cd-b173-4d37645cd761
00:30:55.448     14:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
00:30:56.013    14:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=8a588347-7bb2-4e5a-9127-464ce4c9fc90
00:30:56.013    14:41:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 8a588347-7bb2-4e5a-9127-464ce4c9fc90
00:30:56.272   14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=a44b46e4-ee6c-4eb9-b6a3-19aa3bd2b930
00:30:56.272   14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z a44b46e4-ee6c-4eb9-b6a3-19aa3bd2b930 ]]
00:30:56.272    14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 a44b46e4-ee6c-4eb9-b6a3-19aa3bd2b930 5120
00:30:56.272    14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache
00:30:56.272    14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:30:56.272    14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=a44b46e4-ee6c-4eb9-b6a3-19aa3bd2b930
00:30:56.272    14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120
00:30:56.272     14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size a44b46e4-ee6c-4eb9-b6a3-19aa3bd2b930
00:30:56.272     14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=a44b46e4-ee6c-4eb9-b6a3-19aa3bd2b930
00:30:56.272     14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info
00:30:56.272     14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs
00:30:56.272     14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb
00:30:56.272      14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a44b46e4-ee6c-4eb9-b6a3-19aa3bd2b930
00:30:56.531     14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[
00:30:56.531    {
00:30:56.531      "name": "a44b46e4-ee6c-4eb9-b6a3-19aa3bd2b930",
00:30:56.531      "aliases": [
00:30:56.531        "lvs/basen1p0"
00:30:56.531      ],
00:30:56.531      "product_name": "Logical Volume",
00:30:56.531      "block_size": 4096,
00:30:56.531      "num_blocks": 5242880,
00:30:56.531      "uuid": "a44b46e4-ee6c-4eb9-b6a3-19aa3bd2b930",
00:30:56.531      "assigned_rate_limits": {
00:30:56.531        "rw_ios_per_sec": 0,
00:30:56.531        "rw_mbytes_per_sec": 0,
00:30:56.531        "r_mbytes_per_sec": 0,
00:30:56.531        "w_mbytes_per_sec": 0
00:30:56.531      },
00:30:56.531      "claimed": false,
00:30:56.531      "zoned": false,
00:30:56.531      "supported_io_types": {
00:30:56.531        "read": true,
00:30:56.531        "write": true,
00:30:56.531        "unmap": true,
00:30:56.531        "flush": false,
00:30:56.531        "reset": true,
00:30:56.531        "nvme_admin": false,
00:30:56.531        "nvme_io": false,
00:30:56.531        "nvme_io_md": false,
00:30:56.531        "write_zeroes": true,
00:30:56.531        "zcopy": false,
00:30:56.531        "get_zone_info": false,
00:30:56.531        "zone_management": false,
00:30:56.531        "zone_append": false,
00:30:56.531        "compare": false,
00:30:56.531        "compare_and_write": false,
00:30:56.531        "abort": false,
00:30:56.531        "seek_hole": true,
00:30:56.531        "seek_data": true,
00:30:56.531        "copy": false,
00:30:56.531        "nvme_iov_md": false
00:30:56.531      },
00:30:56.531      "driver_specific": {
00:30:56.531        "lvol": {
00:30:56.531          "lvol_store_uuid": "8a588347-7bb2-4e5a-9127-464ce4c9fc90",
00:30:56.531          "base_bdev": "basen1",
00:30:56.531          "thin_provision": true,
00:30:56.531          "num_allocated_clusters": 0,
00:30:56.531          "snapshot": false,
00:30:56.531          "clone": false,
00:30:56.531          "esnap_clone": false
00:30:56.531        }
00:30:56.531      }
00:30:56.531    }
00:30:56.531  ]'
00:30:56.531      14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:30:56.531     14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096
00:30:56.531      14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:30:56.531     14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880
00:30:56.531     14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480
00:30:56.531     14:41:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480
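The same reduction for the fresh lvol: 5,242,880 blocks x 4,096 B = 21,474,836,480 B = 20,480 MiB, exactly the requested FTL_BASE_SIZE; num_allocated_clusters is 0 in the JSON because the -t (thin) lvol only reserves capacity, it does not back it yet.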
00:30:56.531    14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024
00:30:56.531    14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev
00:30:56.531     14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
00:30:57.098    14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1
00:30:57.098    14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]]
00:30:57.098    14:41:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1
00:30:57.385   14:41:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0
00:30:57.385   14:41:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]]
00:30:57.385   14:41:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d a44b46e4-ee6c-4eb9-b6a3-19aa3bd2b930 -c cachen1p0 --l2p_dram_limit 2
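Condensed, the target-side bring-up traced above is this RPC sequence (commands, addresses and sizes copied from the log; rpc.py stands for the full scripts/rpc.py path and the UUIDs are abbreviated to placeholders):

  rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # -> basen1
  rpc.py bdev_lvol_delete_lvstore -u <stale-lvs-uuid>                  # clear_lvols
  rpc.py bdev_lvol_create_lvstore basen1 lvs                           # -> new lvstore
  rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>              # thin 20 GiB data lvol
  rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # -> cachen1
  rpc.py bdev_split_create cachen1 -s 5120 1                           # -> cachen1p0 (5 GiB)
  rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2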
00:30:57.667  [2024-11-20 14:41:36.449560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.449637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Check configuration
00:30:57.667  [2024-11-20 14:41:36.449663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:30:57.667  [2024-11-20 14:41:36.449677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.449781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.449801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:30:57.667  [2024-11-20 14:41:36.449818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.073 ms
00:30:57.667  [2024-11-20 14:41:36.449830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.449863] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:30:57.667  [2024-11-20 14:41:36.451533] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:30:57.667  [2024-11-20 14:41:36.451595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.451614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:30:57.667  [2024-11-20 14:41:36.451630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.734 ms
00:30:57.667  [2024-11-20 14:41:36.451643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.451866] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 0cdf9912-e7ac-42d1-8302-7706ba869860
00:30:57.667  [2024-11-20 14:41:36.452940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.452987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Default-initialize superblock
00:30:57.667  [2024-11-20 14:41:36.453006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.024 ms
00:30:57.667  [2024-11-20 14:41:36.453020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.457605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.457679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:30:57.667  [2024-11-20 14:41:36.457697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 4.512 ms
00:30:57.667  [2024-11-20 14:41:36.457712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.457784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.457807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:30:57.667  [2024-11-20 14:41:36.457822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.028 ms
00:30:57.667  [2024-11-20 14:41:36.457855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.457929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.457951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Register IO device
00:30:57.667  [2024-11-20 14:41:36.457968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.013 ms
00:30:57.667  [2024-11-20 14:41:36.457982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.458015] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:30:57.667  [2024-11-20 14:41:36.462555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.462606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:30:57.667  [2024-11-20 14:41:36.462628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 4.544 ms
00:30:57.667  [2024-11-20 14:41:36.462640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.462680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.462697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decorate bands
00:30:57.667  [2024-11-20 14:41:36.462713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.005 ms
00:30:57.667  [2024-11-20 14:41:36.462725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.462789] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1
00:30:57.667  [2024-11-20 14:41:36.462954] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
00:30:57.667  [2024-11-20 14:41:36.462979] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:30:57.667  [2024-11-20 14:41:36.462996] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
00:30:57.667  [2024-11-20 14:41:36.463013] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity:         20480.00 MiB
00:30:57.667  [2024-11-20 14:41:36.463028] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity:       5120.00 MiB
00:30:57.667  [2024-11-20 14:41:36.463043] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries:                    3774873
00:30:57.667  [2024-11-20 14:41:36.463058] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size:               4
00:30:57.667  [2024-11-20 14:41:36.463072] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages:           2048
00:30:57.667  [2024-11-20 14:41:36.463083] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count            5
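Cross-checking those numbers: the base device exposes 18,432 MiB of user data (the data_btm region in the dump below) = 4,718,592 blocks of 4 KiB; 3,774,873 L2P entries is 80 % of that, consistent with the ~20 % band overprovisioning FTL reserves by default, and at 4 B per entry the map needs 3,774,873 x 4 ≈ 14.4 MiB, which rounds up to the 14.50 MiB l2p region in the NV cache layout that follows.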
00:30:57.667  [2024-11-20 14:41:36.463098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.463110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize layout
00:30:57.667  [2024-11-20 14:41:36.463137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.313 ms
00:30:57.667  [2024-11-20 14:41:36.463150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.463253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.667  [2024-11-20 14:41:36.463269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Verify layout
00:30:57.667  [2024-11-20 14:41:36.463284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.070 ms
00:30:57.667  [2024-11-20 14:41:36.463309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.667  [2024-11-20 14:41:36.463458] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:30:57.668  [2024-11-20 14:41:36.463480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:30:57.668  [2024-11-20 14:41:36.463496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:30:57.668  [2024-11-20 14:41:36.463516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.463531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:30:57.668  [2024-11-20 14:41:36.463542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.463556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      14.50 MiB
00:30:57.668  [2024-11-20 14:41:36.463567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:30:57.668  [2024-11-20 14:41:36.463609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.62 MiB
00:30:57.668  [2024-11-20 14:41:36.463621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.463649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:30:57.668  [2024-11-20 14:41:36.463662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.75 MiB
00:30:57.668  [2024-11-20 14:41:36.463676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.463687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:30:57.668  [2024-11-20 14:41:36.463702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.38 MiB
00:30:57.668  [2024-11-20 14:41:36.463713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.463731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:30:57.668  [2024-11-20 14:41:36.463742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.50 MiB
00:30:57.668  [2024-11-20 14:41:36.463756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.463767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:30:57.668  [2024-11-20 14:41:36.463781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.88 MiB
00:30:57.668  [2024-11-20 14:41:36.463792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:30:57.668  [2024-11-20 14:41:36.463805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:30:57.668  [2024-11-20 14:41:36.463816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      22.88 MiB
00:30:57.668  [2024-11-20 14:41:36.463829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:30:57.668  [2024-11-20 14:41:36.463840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:30:57.668  [2024-11-20 14:41:36.463853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      30.88 MiB
00:30:57.668  [2024-11-20 14:41:36.463864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:30:57.668  [2024-11-20 14:41:36.463877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:30:57.668  [2024-11-20 14:41:36.463888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      38.88 MiB
00:30:57.668  [2024-11-20 14:41:36.463901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:30:57.668  [2024-11-20 14:41:36.463912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:30:57.668  [2024-11-20 14:41:36.463928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      46.88 MiB
00:30:57.668  [2024-11-20 14:41:36.463939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.463952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:30:57.668  [2024-11-20 14:41:36.463964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.00 MiB
00:30:57.668  [2024-11-20 14:41:36.463977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.463988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log
00:30:57.668  [2024-11-20 14:41:36.464001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.12 MiB
00:30:57.668  [2024-11-20 14:41:36.464012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.464026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror
00:30:57.668  [2024-11-20 14:41:36.464037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.25 MiB
00:30:57.668  [2024-11-20 14:41:36.464055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.464065] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:30:57.668  [2024-11-20 14:41:36.464081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:30:57.668  [2024-11-20 14:41:36.464093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:30:57.668  [2024-11-20 14:41:36.464107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:30:57.668  [2024-11-20 14:41:36.464119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:30:57.668  [2024-11-20 14:41:36.464135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      18432.25 MiB
00:30:57.668  [2024-11-20 14:41:36.464146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.88 MiB
00:30:57.668  [2024-11-20 14:41:36.464161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:30:57.668  [2024-11-20 14:41:36.464172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.25 MiB
00:30:57.668  [2024-11-20 14:41:36.464186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      18432.00 MiB
00:30:57.668  [2024-11-20 14:41:36.464202] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:30:57.668  [2024-11-20 14:41:36.464222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:30:57.668  [2024-11-20 14:41:36.464251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
00:30:57.668  [2024-11-20 14:41:36.464289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
00:30:57.668  [2024-11-20 14:41:36.464303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
00:30:57.668  [2024-11-20 14:41:36.464315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
00:30:57.668  [2024-11-20 14:41:36.464329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060
00:30:57.668  [2024-11-20 14:41:36.464425] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:30:57.668  [2024-11-20 14:41:36.464441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:30:57.668  [2024-11-20 14:41:36.464469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:30:57.668  [2024-11-20 14:41:36.464482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:30:57.668  [2024-11-20 14:41:36.464496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:30:57.668  [2024-11-20 14:41:36.464509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:57.668  [2024-11-20 14:41:36.464524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Layout upgrade
00:30:57.668  [2024-11-20 14:41:36.464537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.137 ms
00:30:57.668  [2024-11-20 14:41:36.464550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:57.668  [2024-11-20 14:41:36.464620] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while.
00:30:57.668  [2024-11-20 14:41:36.464646] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks
00:30:59.567  [2024-11-20 14:41:38.518250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.567  [2024-11-20 14:41:38.518333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Scrub NV cache
00:30:59.567  [2024-11-20 14:41:38.518356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 2053.642 ms
00:30:59.567  [2024-11-20 14:41:38.518372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
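Scrubbing dominates this first-time startup: wiping the 5 NV cache chunks (the 5,120 MiB cache split into roughly 1 GiB chunks) accounts for just over 2 s of the ~2.5 s 'FTL startup' total reported at the end of the management sequence below.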
00:30:59.825  [2024-11-20 14:41:38.551944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.552030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:30:59.825  [2024-11-20 14:41:38.552059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 33.225 ms
00:30:59.825  [2024-11-20 14:41:38.552079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.825  [2024-11-20 14:41:38.552246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.552282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize band addresses
00:30:59.825  [2024-11-20 14:41:38.552307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.019 ms
00:30:59.825  [2024-11-20 14:41:38.552325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.825  [2024-11-20 14:41:38.594502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.594779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:30:59.825  [2024-11-20 14:41:38.594812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 42.115 ms
00:30:59.825  [2024-11-20 14:41:38.594830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.825  [2024-11-20 14:41:38.594902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.594921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:30:59.825  [2024-11-20 14:41:38.594935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:30:59.825  [2024-11-20 14:41:38.594949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.825  [2024-11-20 14:41:38.595379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.595410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:30:59.825  [2024-11-20 14:41:38.595441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.320 ms
00:30:59.825  [2024-11-20 14:41:38.595463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.825  [2024-11-20 14:41:38.595546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.595592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:30:59.825  [2024-11-20 14:41:38.595620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.033 ms
00:30:59.825  [2024-11-20 14:41:38.595642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.825  [2024-11-20 14:41:38.613630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.613888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:30:59.825  [2024-11-20 14:41:38.613921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 17.952 ms
00:30:59.825  [2024-11-20 14:41:38.613939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.825  [2024-11-20 14:41:38.640530] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:30:59.825  [2024-11-20 14:41:38.641518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.641559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize L2P
00:30:59.825  [2024-11-20 14:41:38.641602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 27.421 ms
00:30:59.825  [2024-11-20 14:41:38.641618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.825  [2024-11-20 14:41:38.667680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.667917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Clear L2P
00:30:59.825  [2024-11-20 14:41:38.667956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 25.986 ms
00:30:59.825  [2024-11-20 14:41:38.667972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.825  [2024-11-20 14:41:38.668115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.668140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize band initialization
00:30:59.825  [2024-11-20 14:41:38.668165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.063 ms
00:30:59.825  [2024-11-20 14:41:38.668178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.825  [2024-11-20 14:41:38.700937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.825  [2024-11-20 14:41:38.701011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Save initial band info metadata
00:30:59.825  [2024-11-20 14:41:38.701037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 32.657 ms
00:30:59.825  [2024-11-20 14:41:38.701050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.826  [2024-11-20 14:41:38.733444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.826  [2024-11-20 14:41:38.733740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Save initial chunk info metadata
00:30:59.826  [2024-11-20 14:41:38.733779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 32.312 ms
00:30:59.826  [2024-11-20 14:41:38.733794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:30:59.826  [2024-11-20 14:41:38.734555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:59.826  [2024-11-20 14:41:38.734598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize P2L checkpointing
00:30:59.826  [2024-11-20 14:41:38.734623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.692 ms
00:30:59.826  [2024-11-20 14:41:38.734636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:00.084  [2024-11-20 14:41:38.821715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:00.084  [2024-11-20 14:41:38.821791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Wipe P2L region
00:31:00.084  [2024-11-20 14:41:38.821821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 86.982 ms
00:31:00.084  [2024-11-20 14:41:38.821835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:00.084  [2024-11-20 14:41:38.855746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:00.084  [2024-11-20 14:41:38.856038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Clear trim map
00:31:00.084  [2024-11-20 14:41:38.856093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 33.759 ms
00:31:00.084  [2024-11-20 14:41:38.856108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:00.084  [2024-11-20 14:41:38.889755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:00.084  [2024-11-20 14:41:38.889991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Clear trim log
00:31:00.084  [2024-11-20 14:41:38.890028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 33.557 ms
00:31:00.084  [2024-11-20 14:41:38.890043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:00.084  [2024-11-20 14:41:38.923009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:00.084  [2024-11-20 14:41:38.923079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL dirty state
00:31:00.084  [2024-11-20 14:41:38.923104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 32.894 ms
00:31:00.084  [2024-11-20 14:41:38.923117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:00.084  [2024-11-20 14:41:38.923190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:00.084  [2024-11-20 14:41:38.923213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Start core poller
00:31:00.084  [2024-11-20 14:41:38.923233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.009 ms
00:31:00.084  [2024-11-20 14:41:38.923245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:00.084  [2024-11-20 14:41:38.923383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:00.084  [2024-11-20 14:41:38.923406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize initialization
00:31:00.084  [2024-11-20 14:41:38.923434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.046 ms
00:31:00.084  [2024-11-20 14:41:38.923449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:00.084  [2024-11-20 14:41:38.924679] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2474.555 ms, result 0
00:31:00.084  {
00:31:00.084    "name": "ftl",
00:31:00.084    "uuid": "0cdf9912-e7ac-42d1-8302-7706ba869860"
00:31:00.084  }
00:31:00.084   14:41:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP
00:31:00.343  [2024-11-20 14:41:39.203862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:00.343   14:41:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
00:31:00.602   14:41:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
00:31:01.168  [2024-11-20 14:41:39.860714] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:31:01.168   14:41:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
00:31:01.425  [2024-11-20 14:41:40.210513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:31:01.426   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
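With the FTL bdev up (uuid 0cdf9912-e7ac-42d1-8302-7706ba869860), common.sh@121-126 exports it over NVMe/TCP on loopback so a second process can drive I/O against it, then snapshots the target config (presumably into config/tgt.json, the file the -f checks at @84/@93 look for):

  rpc.py nvmf_create_transport --trtype TCP
  rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  rpc.py save_config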
00:31:01.992  Fill FTL, iteration 1
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=()
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 ))
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1'
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
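The fill parameters line up: bs x count = 1,048,576 x 1,024 = 1,073,741,824 B, i.e. the size=1073741824 set at @28, so each of the iterations=2 passes pushes 1 GiB into ftln1 at queue depth 2 starting from seek=0; the sums=() array declared at @35 suggests a checksum of each pass is recorded for verification after the shutdown.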
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]]
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83667
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock
00:31:01.992   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid
00:31:01.993   14:41:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83667 /var/tmp/spdk.tgt.sock
00:31:01.993   14:41:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83667 ']'
00:31:01.993   14:41:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock
00:31:01.993   14:41:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:01.993   14:41:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...'
00:31:01.993  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...
00:31:01.993   14:41:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:01.993   14:41:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:01.993  [2024-11-20 14:41:40.829017] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:31:01.993  [2024-11-20 14:41:40.829317] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83667 ]
00:31:02.251  [2024-11-20 14:41:41.008601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:02.251  [2024-11-20 14:41:41.144967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:03.185   14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:03.185   14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:31:03.185   14:41:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
00:31:03.443  ftln1
00:31:03.443   14:41:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": ['
00:31:03.443   14:41:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
00:31:04.010   14:41:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}'
00:31:04.010   14:41:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83667
00:31:04.010   14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83667 ']'
00:31:04.010   14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83667
00:31:04.010    14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:31:04.010   14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:04.010    14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83667
00:31:04.010  killing process with pid 83667
00:31:04.010   14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:04.010   14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:04.010   14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83667'
00:31:04.010   14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83667
00:31:04.010   14:41:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83667
00:31:05.910   14:41:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid
00:31:05.910   14:41:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
00:31:06.168  [2024-11-20 14:41:44.979215] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:31:06.168  [2024-11-20 14:41:44.979660] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83720 ]
00:31:06.427  [2024-11-20 14:41:45.167961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:06.427  [2024-11-20 14:41:45.303197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:08.359  
[2024-11-20T14:41:47.908Z] Copying: 188/1024 [MB] (188 MBps)
[2024-11-20T14:41:48.842Z] Copying: 383/1024 [MB] (195 MBps)
[2024-11-20T14:41:50.215Z] Copying: 586/1024 [MB] (203 MBps)
[2024-11-20T14:41:51.150Z] Copying: 790/1024 [MB] (204 MBps)
[2024-11-20T14:41:51.150Z] Copying: 1001/1024 [MB] (211 MBps)
[2024-11-20T14:41:52.084Z] Copying: 1024/1024 [MB] (average 200 MBps)
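The "Fill FTL" pass goes through a second, initiator-side SPDK app: tcp_initiator_setup starts a helper spdk_tgt on /var/tmp/spdk.tgt.sock, attaches the exported subsystem as controller 'ftl' (which surfaces namespace 1 as bdev ftln1, echoed above), saves the bdev subsystem config into ini.json, and stops the helper; spdk_dd then replays that config and streams 1024 x 1 MiB of /dev/urandom into ftln1 at queue depth 2, landing at the ~200 MBps average shown above. A condensed sketch of that sequence, assuming the same sockets and paths as in the trace:

  # Initiator side: attach the NVMe/TCP subsystem, then stream data into ftln1
  rpc="scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2018-09.io.spdk:cnode0           # prints 'ftln1' for namespace 1
  # ...save_subsystem_config -n bdev into ini.json, stop the helper app...
  spdk_dd --json=ini.json --if=/dev/urandom --ob=ftln1 \
      --bs=1048576 --count=1024 --qd=2 --seek=0   # 1 GiB fill per iteration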
00:31:13.102  
00:31:13.102  Calculate MD5 checksum, iteration 1
00:31:13.102   14:41:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024
00:31:13.102   14:41:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1'
00:31:13.102   14:41:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:13.102   14:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:13.102   14:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:13.102   14:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:13.102   14:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:13.102   14:41:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:13.102  [2024-11-20 14:41:52.019933] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:31:13.102  [2024-11-20 14:41:52.020078] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83790 ]
00:31:13.361  [2024-11-20 14:41:52.198959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:13.361  [2024-11-20 14:41:52.323151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:15.262  
[2024-11-20T14:41:54.811Z] Copying: 505/1024 [MB] (505 MBps)
[2024-11-20T14:41:54.811Z] Copying: 1009/1024 [MB] (504 MBps)
[2024-11-20T14:41:55.746Z] Copying: 1024/1024 [MB] (average 504 MBps)
00:31:16.764  
00:31:16.764   14:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024
00:31:16.764   14:41:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:19.298    14:41:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:31:19.298  Fill FTL, iteration 2
00:31:19.298   14:41:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=04c57546b366e418d8401b1412f8c555
00:31:19.298   14:41:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:31:19.298   14:41:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
00:31:19.298   14:41:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2'
00:31:19.298   14:41:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024
00:31:19.298   14:41:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:19.298   14:41:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:19.298   14:41:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:19.298   14:41:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:19.298   14:41:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024
00:31:19.298  [2024-11-20 14:41:57.959714] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:31:19.298  [2024-11-20 14:41:57.960098] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83852 ]
00:31:19.298  [2024-11-20 14:41:58.147749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:19.298  [2024-11-20 14:41:58.271739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:21.201  
[2024-11-20T14:42:00.749Z] Copying: 200/1024 [MB] (200 MBps)
[2024-11-20T14:42:02.126Z] Copying: 404/1024 [MB] (204 MBps)
[2024-11-20T14:42:03.063Z] Copying: 610/1024 [MB] (206 MBps)
[2024-11-20T14:42:03.999Z] Copying: 804/1024 [MB] (194 MBps)
[2024-11-20T14:42:03.999Z] Copying: 1005/1024 [MB] (201 MBps)
[2024-11-20T14:42:04.934Z] Copying: 1024/1024 [MB] (average 200 MBps)
00:31:25.952  
00:31:25.952  Calculate MD5 checksum, iteration 2
00:31:25.952   14:42:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048
00:31:25.952   14:42:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2'
00:31:25.952   14:42:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:31:25.952   14:42:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:25.952   14:42:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:25.952   14:42:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:25.952   14:42:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:25.952   14:42:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:31:25.952  [2024-11-20 14:42:04.912554] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:31:25.952  [2024-11-20 14:42:04.912756] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83922 ]
00:31:26.211  [2024-11-20 14:42:05.093533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:26.211  [2024-11-20 14:42:05.186012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:28.114  
[2024-11-20T14:42:08.058Z] Copying: 470/1024 [MB] (470 MBps)
[2024-11-20T14:42:08.058Z] Copying: 944/1024 [MB] (474 MBps)
[2024-11-20T14:42:09.435Z] Copying: 1024/1024 [MB] (average 473 MBps)
00:31:30.453  
00:31:30.453   14:42:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048
00:31:30.453   14:42:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:32.982    14:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d '
00:31:32.982   14:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b1b5e2ceace26bd1f777d7bb2e0fd3b3
00:31:32.982   14:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ ))
00:31:32.982   14:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations ))
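With both passes done, the loop has filled 2 GiB in total: each iteration advances seek (the write offset into ftln1) and skip (the read offset for the verify pass) by count=1024 blocks of bs=1048576 bytes, reads the freshly written region back into a scratch file, and stashes one digest per iteration (04c57546... and b1b5e2ce... above). The loop skeleton, reconstructed from the upgrade_shutdown.sh@28-@48 xtrace lines, with testfile standing in for test/ftl/file:

  # Fill-and-checksum loop as traced above (offsets counted in 1 MiB blocks)
  seek=0; skip=0; bs=1048576; count=1024; qd=2; iterations=2; sums=()
  for (( i = 0; i < iterations; i++ )); do
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$(( seek + count ))                    # next pass writes the following 1 GiB
      tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$(( skip + count ))
      sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')   # keep only the digest field
  done
  # The stored digests are presumably compared against a post-restart readback later in the test.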
00:31:32.982   14:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:31:32.982  [2024-11-20 14:42:11.781684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:32.982  [2024-11-20 14:42:11.781762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:31:32.982  [2024-11-20 14:42:11.781800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.008 ms
00:31:32.982  [2024-11-20 14:42:11.781811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:32.982  [2024-11-20 14:42:11.781846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:32.982  [2024-11-20 14:42:11.781869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:31:32.982  [2024-11-20 14:42:11.781881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:31:32.982  [2024-11-20 14:42:11.781891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:32.982  [2024-11-20 14:42:11.781918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:32.982  [2024-11-20 14:42:11.781932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:31:32.982  [2024-11-20 14:42:11.781943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:31:32.982  [2024-11-20 14:42:11.781953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:32.982  [2024-11-20 14:42:11.782044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.330 ms, result 0
00:31:32.982  true
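bdev_ftl_set_property drives the small three-step management pipeline traced above (decode the property string, apply it, clean up) and the RPC returns true on success; bdev_ftl_get_properties then reads the whole property table back, as shown next. The pair, using the same rpc.py calls as in the trace:

  # Toggle an FTL property and read the table back
  scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
  scripts/rpc.py bdev_ftl_get_properties -b ftl   # dumps superblock version, bands, chunks, flags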
00:31:32.982   14:42:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:33.240  {
00:31:33.240    "name": "ftl",
00:31:33.240    "properties": [
00:31:33.240      {
00:31:33.240        "name": "superblock_version",
00:31:33.240        "value": 5,
00:31:33.240        "read-only": true
00:31:33.240      },
00:31:33.240      {
00:31:33.240        "name": "base_device",
00:31:33.240        "bands": [
00:31:33.240          {
00:31:33.240            "id": 0,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 1,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 2,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 3,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 4,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 5,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 6,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 7,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 8,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 9,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 10,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 11,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 12,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 13,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 14,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 15,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 16,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          },
00:31:33.240          {
00:31:33.240            "id": 17,
00:31:33.240            "state": "FREE",
00:31:33.240            "validity": 0.0
00:31:33.240          }
00:31:33.240        ],
00:31:33.241        "read-only": true
00:31:33.241      },
00:31:33.241      {
00:31:33.241        "name": "cache_device",
00:31:33.241        "type": "bdev",
00:31:33.241        "chunks": [
00:31:33.241          {
00:31:33.241            "id": 0,
00:31:33.241            "state": "INACTIVE",
00:31:33.241            "utilization": 0.0
00:31:33.241          },
00:31:33.241          {
00:31:33.241            "id": 1,
00:31:33.241            "state": "CLOSED",
00:31:33.241            "utilization": 1.0
00:31:33.241          },
00:31:33.241          {
00:31:33.241            "id": 2,
00:31:33.241            "state": "CLOSED",
00:31:33.241            "utilization": 1.0
00:31:33.241          },
00:31:33.241          {
00:31:33.241            "id": 3,
00:31:33.241            "state": "OPEN",
00:31:33.241            "utilization": 0.001953125
00:31:33.241          },
00:31:33.241          {
00:31:33.241            "id": 4,
00:31:33.241            "state": "OPEN",
00:31:33.241            "utilization": 0.0
00:31:33.241          }
00:31:33.241        ],
00:31:33.241        "read-only": true
00:31:33.241      },
00:31:33.241      {
00:31:33.241        "name": "verbose_mode",
00:31:33.241        "value": true,
00:31:33.241        "unit": "",
00:31:33.241        "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:31:33.241      },
00:31:33.241      {
00:31:33.241        "name": "prep_upgrade_on_shutdown",
00:31:33.241        "value": false,
00:31:33.241        "unit": "",
00:31:33.241        "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:31:33.241      }
00:31:33.241    ]
00:31:33.241  }
00:31:33.241   14:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
00:31:33.500  [2024-11-20 14:42:12.418496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:33.500  [2024-11-20 14:42:12.418802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:31:33.500  [2024-11-20 14:42:12.418936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.009 ms
00:31:33.500  [2024-11-20 14:42:12.419089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:33.500  [2024-11-20 14:42:12.419179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:33.500  [2024-11-20 14:42:12.419311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:31:33.500  [2024-11-20 14:42:12.419438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:31:33.500  [2024-11-20 14:42:12.419519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:33.500  [2024-11-20 14:42:12.419720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:33.500  [2024-11-20 14:42:12.419791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:31:33.500  [2024-11-20 14:42:12.420010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:31:33.500  [2024-11-20 14:42:12.420066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:33.500  [2024-11-20 14:42:12.420305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 1.804 ms, result 0
00:31:33.500  true
00:31:33.500    14:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties
00:31:33.500    14:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:31:33.500    14:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:33.759   14:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3
00:31:33.759   14:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]]
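The test gates on the cache actually holding data before it proceeds: the jq filter above counts cache_device chunks whose utilization is non-zero, and with chunks 1 and 2 CLOSED at 1.0 plus chunk 3 OPEN at 0.001953125 that count is 3, so the -eq 0 guard does not fire. The filter on its own, assuming the same JSON shape as the dump above:

  # Count NV-cache chunks with any data in them (yields 3 for the dump above)
  scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'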
00:31:33.759   14:42:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:31:34.018  [2024-11-20 14:42:12.983374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:34.018  [2024-11-20 14:42:12.983458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:31:34.018  [2024-11-20 14:42:12.983480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.008 ms
00:31:34.018  [2024-11-20 14:42:12.983492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:34.018  [2024-11-20 14:42:12.983529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:34.018  [2024-11-20 14:42:12.983545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:31:34.018  [2024-11-20 14:42:12.983558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:31:34.018  [2024-11-20 14:42:12.983589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:34.018  [2024-11-20 14:42:12.983625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:34.018  [2024-11-20 14:42:12.983641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:31:34.018  [2024-11-20 14:42:12.983654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:31:34.018  [2024-11-20 14:42:12.983665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:34.018  [2024-11-20 14:42:12.983743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.367 ms, result 0
00:31:34.018  true
00:31:34.277   14:42:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:34.535  {
00:31:34.535    "name": "ftl",
00:31:34.535    "properties": [
00:31:34.535      {
00:31:34.535        "name": "superblock_version",
00:31:34.535        "value": 5,
00:31:34.535        "read-only": true
00:31:34.535      },
00:31:34.535      {
00:31:34.535        "name": "base_device",
00:31:34.535        "bands": [
00:31:34.535          {
00:31:34.535            "id": 0,
00:31:34.535            "state": "FREE",
00:31:34.535            "validity": 0.0
00:31:34.535          },
00:31:34.535          {
00:31:34.535            "id": 1,
00:31:34.535            "state": "FREE",
00:31:34.535            "validity": 0.0
00:31:34.535          },
00:31:34.535          {
00:31:34.535            "id": 2,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 3,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 4,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 5,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 6,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 7,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 8,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 9,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 10,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 11,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 12,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 13,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 14,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 15,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 16,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 17,
00:31:34.536            "state": "FREE",
00:31:34.536            "validity": 0.0
00:31:34.536          }
00:31:34.536        ],
00:31:34.536        "read-only": true
00:31:34.536      },
00:31:34.536      {
00:31:34.536        "name": "cache_device",
00:31:34.536        "type": "bdev",
00:31:34.536        "chunks": [
00:31:34.536          {
00:31:34.536            "id": 0,
00:31:34.536            "state": "INACTIVE",
00:31:34.536            "utilization": 0.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 1,
00:31:34.536            "state": "CLOSED",
00:31:34.536            "utilization": 1.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 2,
00:31:34.536            "state": "CLOSED",
00:31:34.536            "utilization": 1.0
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 3,
00:31:34.536            "state": "OPEN",
00:31:34.536            "utilization": 0.001953125
00:31:34.536          },
00:31:34.536          {
00:31:34.536            "id": 4,
00:31:34.536            "state": "OPEN",
00:31:34.536            "utilization": 0.0
00:31:34.536          }
00:31:34.536        ],
00:31:34.536        "read-only": true
00:31:34.536      },
00:31:34.536      {
00:31:34.536        "name": "verbose_mode",
00:31:34.536        "value": true,
00:31:34.536        "unit": "",
00:31:34.536        "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:31:34.536      },
00:31:34.536      {
00:31:34.536        "name": "prep_upgrade_on_shutdown",
00:31:34.536        "value": true,
00:31:34.536        "unit": "",
00:31:34.536        "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:31:34.536      }
00:31:34.536    ]
00:31:34.536  }
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83540 ]]
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83540
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83540 ']'
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83540
00:31:34.536    14:42:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:34.536    14:42:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83540
00:31:34.536  killing process with pid 83540
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83540'
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83540
00:31:34.536   14:42:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83540
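tcp_target_shutdown tears the first target down through killprocess, whose xtrace is interleaved above: it probes the pid, sanity-checks the process name, then kills and reaps it, and that kill is what kicks off the 'FTL shutdown' management process traced below. The pattern, reduced to its essentials:

  # killprocess "$pid", as traced above
  kill -0 "$pid"                       # fails if the process is already gone
  ps --no-headers -o comm= "$pid"      # confirm it is our reactor before killing it
  kill "$pid"
  wait "$pid"                          # reap; returns once the FTL shutdown completes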
00:31:35.471  [2024-11-20 14:42:14.351507] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:31:35.471  [2024-11-20 14:42:14.369165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:35.471  [2024-11-20 14:42:14.369220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinit core IO channel
00:31:35.471  [2024-11-20 14:42:14.369242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:31:35.471  [2024-11-20 14:42:14.369254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:35.471  [2024-11-20 14:42:14.369287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:31:35.471  [2024-11-20 14:42:14.372769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:35.471  [2024-11-20 14:42:14.372806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Unregister IO device
00:31:35.471  [2024-11-20 14:42:14.372822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 3.459 ms
00:31:35.471  [2024-11-20 14:42:14.372841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:23.943545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.454  [2024-11-20 14:42:23.943650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Stop core poller
00:31:45.454  [2024-11-20 14:42:23.943681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 9570.733 ms
00:31:45.454  [2024-11-20 14:42:23.943693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:23.945054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.454  [2024-11-20 14:42:23.945097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist L2P
00:31:45.454  [2024-11-20 14:42:23.945115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.334 ms
00:31:45.454  [2024-11-20 14:42:23.945127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:23.946431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.454  [2024-11-20 14:42:23.946464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finish L2P trims
00:31:45.454  [2024-11-20 14:42:23.946501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.262 ms
00:31:45.454  [2024-11-20 14:42:23.946523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:23.958917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.454  [2024-11-20 14:42:23.958955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist NV cache metadata
00:31:45.454  [2024-11-20 14:42:23.958987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 12.289 ms
00:31:45.454  [2024-11-20 14:42:23.959013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:23.967052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.454  [2024-11-20 14:42:23.967093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist valid map metadata
00:31:45.454  [2024-11-20 14:42:23.967124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 7.986 ms
00:31:45.454  [2024-11-20 14:42:23.967134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:23.967226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.454  [2024-11-20 14:42:23.967251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist P2L metadata
00:31:45.454  [2024-11-20 14:42:23.967263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.052 ms
00:31:45.454  [2024-11-20 14:42:23.967273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:23.978981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.454  [2024-11-20 14:42:23.979031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist band info metadata
00:31:45.454  [2024-11-20 14:42:23.979061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 11.689 ms
00:31:45.454  [2024-11-20 14:42:23.979071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:23.990684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.454  [2024-11-20 14:42:23.990718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist trim metadata
00:31:45.454  [2024-11-20 14:42:23.990748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 11.575 ms
00:31:45.454  [2024-11-20 14:42:23.990758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:24.002147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.454  [2024-11-20 14:42:24.002180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist superblock
00:31:45.454  [2024-11-20 14:42:24.002210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 11.351 ms
00:31:45.454  [2024-11-20 14:42:24.002219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:24.013944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.454  [2024-11-20 14:42:24.014166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL clean state
00:31:45.454  [2024-11-20 14:42:24.014302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 11.654 ms
00:31:45.454  [2024-11-20 14:42:24.014421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.454  [2024-11-20 14:42:24.014482] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:31:45.454  [2024-11-20 14:42:24.014506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:31:45.454  [2024-11-20 14:42:24.014520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   2:   261120 / 261120 	wr_cnt: 1	state: closed
00:31:45.454  [2024-11-20 14:42:24.014548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   3:     2048 / 261120 	wr_cnt: 1	state: closed
00:31:45.454  [2024-11-20 14:42:24.014560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.454  [2024-11-20 14:42:24.014612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.454  [2024-11-20 14:42:24.014628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.454  [2024-11-20 14:42:24.014640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.454  [2024-11-20 14:42:24.014651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.454  [2024-11-20 14:42:24.014679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.454  [2024-11-20 14:42:24.014705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.454  [2024-11-20 14:42:24.014716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.454  [2024-11-20 14:42:24.014727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.454  [2024-11-20 14:42:24.014738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.455  [2024-11-20 14:42:24.014749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.455  [2024-11-20 14:42:24.014760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.455  [2024-11-20 14:42:24.014771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.455  [2024-11-20 14:42:24.014782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.455  [2024-11-20 14:42:24.014793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:31:45.455  [2024-11-20 14:42:24.014808] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 
00:31:45.455  [2024-11-20 14:42:24.014818] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID:         0cdf9912-e7ac-42d1-8302-7706ba869860
00:31:45.455  [2024-11-20 14:42:24.014830] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs:    524288
00:31:45.455  [2024-11-20 14:42:24.014840] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes:        786752
00:31:45.455  [2024-11-20 14:42:24.014850] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes:         524288
00:31:45.455  [2024-11-20 14:42:24.014861] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF:                 1.5006
00:31:45.455  [2024-11-20 14:42:24.014878] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:31:45.455  [2024-11-20 14:42:24.014893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   crit: 0
00:31:45.455  [2024-11-20 14:42:24.014907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   high: 0
00:31:45.455  [2024-11-20 14:42:24.014917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]    low: 0
00:31:45.455  [2024-11-20 14:42:24.014926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]  start: 0
00:31:45.455  [2024-11-20 14:42:24.014937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.455  [2024-11-20 14:42:24.014948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Dump statistics
00:31:45.455  [2024-11-20 14:42:24.014959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.456 ms
00:31:45.455  [2024-11-20 14:42:24.014971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.030935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.455  [2024-11-20 14:42:24.031109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize L2P
00:31:45.455  [2024-11-20 14:42:24.031244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 15.910 ms
00:31:45.455  [2024-11-20 14:42:24.031292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.031877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:45.455  [2024-11-20 14:42:24.032068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize P2L checkpointing
00:31:45.455  [2024-11-20 14:42:24.032189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.469 ms
00:31:45.455  [2024-11-20 14:42:24.032285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.082067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.082278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:31:45.455  [2024-11-20 14:42:24.082387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.082486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.082585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.082714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:31:45.455  [2024-11-20 14:42:24.082764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.082800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.083001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.083129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:31:45.455  [2024-11-20 14:42:24.083246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.083371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.083466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.083518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:31:45.455  [2024-11-20 14:42:24.083650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.083710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.177605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.177882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:31:45.455  [2024-11-20 14:42:24.178007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.178076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.260192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.260470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:31:45.455  [2024-11-20 14:42:24.260501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.260514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.260723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.260748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:31:45.455  [2024-11-20 14:42:24.260762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.260782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.260875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.260894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:31:45.455  [2024-11-20 14:42:24.260908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.260919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.261111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.261128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:31:45.455  [2024-11-20 14:42:24.261139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.261149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.261204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.261220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize superblock
00:31:45.455  [2024-11-20 14:42:24.261231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.261241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.261282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.261296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:31:45.455  [2024-11-20 14:42:24.261307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.261317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.261371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:45.455  [2024-11-20 14:42:24.261386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:31:45.455  [2024-11-20 14:42:24.261397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:31:45.455  [2024-11-20 14:42:24.261407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:45.455  [2024-11-20 14:42:24.261537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9892.416 ms, result 0
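The shutdown trace doubles as a consistency check on the fill: bands 1 and 2 are fully written (261120/261120 each) and band 3 holds 2048 blocks, i.e. 2*261120 + 2048 = 524288 valid LBAs, which matches the "user writes" figure exactly, and the reported WAF is simply total writes over user writes (the extra 262464 writes are FTL-internal metadata and relocation traffic). The arithmetic, as a one-liner:

  # Write amplification from the stats dump: 786752 total / 524288 user
  awk 'BEGIN { printf "WAF: %.4f\n", 786752 / 524288 }'   # prints WAF: 1.5006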
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84146
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84146
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84146 ']'
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:48.741  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:48.741   14:42:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:48.741  [2024-11-20 14:42:27.586144] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:31:48.741  [2024-11-20 14:42:27.586320] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84146 ]
00:31:48.999  [2024-11-20 14:42:27.783343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:48.999  [2024-11-20 14:42:27.907226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
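tcp_target_setup now restarts the main target, this time replaying the configuration saved before shutdown (tgt.json), so the FTL device comes back in the clean, upgrade-prepared state rather than being created from scratch; the bdev_open_ext notices below most likely just mean the cachen1 bdev is not registered yet while the config is still loading. A sketch of the restart, assuming the paths and helpers traced above:

  # Relaunch the target from the saved config and wait for its RPC socket
  spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # polls /var/tmp/spdk.sock until the app answers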
00:31:49.934  [2024-11-20 14:42:28.759508] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:31:49.934  [2024-11-20 14:42:28.759615] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:31:49.934  [2024-11-20 14:42:28.909063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.934  [2024-11-20 14:42:28.909141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Check configuration
00:31:49.934  [2024-11-20 14:42:28.909163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:31:49.934  [2024-11-20 14:42:28.909176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.934  [2024-11-20 14:42:28.909272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.934  [2024-11-20 14:42:28.909294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:31:49.934  [2024-11-20 14:42:28.909308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.065 ms
00:31:49.934  [2024-11-20 14:42:28.909320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.934  [2024-11-20 14:42:28.909366] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:31:49.934  [2024-11-20 14:42:28.910384] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:31:49.934  [2024-11-20 14:42:28.910438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:49.934  [2024-11-20 14:42:28.910452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:31:49.934  [2024-11-20 14:42:28.910466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.089 ms
00:31:49.934  [2024-11-20 14:42:28.910478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:49.934  [2024-11-20 14:42:28.911742] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0
00:31:50.194  [2024-11-20 14:42:28.928768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:50.194  [2024-11-20 14:42:28.928843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Load super block
00:31:50.194  [2024-11-20 14:42:28.928876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 17.024 ms
00:31:50.194  [2024-11-20 14:42:28.928889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:50.194  [2024-11-20 14:42:28.929005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:50.194  [2024-11-20 14:42:28.929026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Validate super block
00:31:50.194  [2024-11-20 14:42:28.929041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.038 ms
00:31:50.194  [2024-11-20 14:42:28.929052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:50.194  [2024-11-20 14:42:28.933565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:50.194  [2024-11-20 14:42:28.933625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:31:50.194  [2024-11-20 14:42:28.933642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 4.373 ms
00:31:50.194  [2024-11-20 14:42:28.933654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:50.194  [2024-11-20 14:42:28.933752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:50.194  [2024-11-20 14:42:28.933773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:31:50.194  [2024-11-20 14:42:28.933787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.059 ms
00:31:50.194  [2024-11-20 14:42:28.933800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:50.194  [2024-11-20 14:42:28.933876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:50.194  [2024-11-20 14:42:28.933894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Register IO device
00:31:50.194  [2024-11-20 14:42:28.933913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.012 ms
00:31:50.194  [2024-11-20 14:42:28.933925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:50.194  [2024-11-20 14:42:28.933964] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:31:50.194  [2024-11-20 14:42:28.938300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:50.194  [2024-11-20 14:42:28.938339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:31:50.194  [2024-11-20 14:42:28.938356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 4.346 ms
00:31:50.194  [2024-11-20 14:42:28.938374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:50.194  [2024-11-20 14:42:28.938412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:50.194  [2024-11-20 14:42:28.938428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decorate bands
00:31:50.194  [2024-11-20 14:42:28.938441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:31:50.194  [2024-11-20 14:42:28.938452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:50.194  [2024-11-20 14:42:28.938506] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0
00:31:50.194  [2024-11-20 14:42:28.938538] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes
00:31:50.194  [2024-11-20 14:42:28.938605] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes
00:31:50.194  [2024-11-20 14:42:28.938631] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes
00:31:50.194  [2024-11-20 14:42:28.938745] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
00:31:50.194  [2024-11-20 14:42:28.938766] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:31:50.194  [2024-11-20 14:42:28.938782] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
00:31:50.194  [2024-11-20 14:42:28.938798] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity:         20480.00 MiB
00:31:50.194  [2024-11-20 14:42:28.938811] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity:       5120.00 MiB
00:31:50.194  [2024-11-20 14:42:28.938830] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries:                    3774873
00:31:50.194  [2024-11-20 14:42:28.938842] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size:               4
00:31:50.194  [2024-11-20 14:42:28.938854] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages:           2048
00:31:50.194  [2024-11-20 14:42:28.938865] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count            5
00:31:50.194  [2024-11-20 14:42:28.938879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:50.194  [2024-11-20 14:42:28.938890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize layout
00:31:50.194  [2024-11-20 14:42:28.938902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.377 ms
00:31:50.194  [2024-11-20 14:42:28.938914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:50.194  [2024-11-20 14:42:28.939012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:50.194  [2024-11-20 14:42:28.939032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Verify layout
00:31:50.194  [2024-11-20 14:42:28.939045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.071 ms
00:31:50.194  [2024-11-20 14:42:28.939063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:50.194  [2024-11-20 14:42:28.939214] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:31:50.194  [2024-11-20 14:42:28.939233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:31:50.194  [2024-11-20 14:42:28.939246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:31:50.194  [2024-11-20 14:42:28.939259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:50.194  [2024-11-20 14:42:28.939271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:31:50.194  [2024-11-20 14:42:28.939281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.12 MiB
00:31:50.194  [2024-11-20 14:42:28.939293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      14.50 MiB
00:31:50.194  [2024-11-20 14:42:28.939310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:31:50.194  [2024-11-20 14:42:28.939321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.62 MiB
00:31:50.194  [2024-11-20 14:42:28.939332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:50.194  [2024-11-20 14:42:28.939343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:31:50.194  [2024-11-20 14:42:28.939354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.75 MiB
00:31:50.194  [2024-11-20 14:42:28.939364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:50.194  [2024-11-20 14:42:28.939375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:31:50.194  [2024-11-20 14:42:28.939386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.38 MiB
00:31:50.194  [2024-11-20 14:42:28.939396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:50.194  [2024-11-20 14:42:28.939407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:31:50.194  [2024-11-20 14:42:28.939418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.50 MiB
00:31:50.194  [2024-11-20 14:42:28.939441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:50.194  [2024-11-20 14:42:28.939453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:31:50.195  [2024-11-20 14:42:28.939464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.88 MiB
00:31:50.195  [2024-11-20 14:42:28.939475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:50.195  [2024-11-20 14:42:28.939486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:31:50.195  [2024-11-20 14:42:28.939497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      22.88 MiB
00:31:50.195  [2024-11-20 14:42:28.939507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:50.195  [2024-11-20 14:42:28.939533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:31:50.195  [2024-11-20 14:42:28.939544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      30.88 MiB
00:31:50.195  [2024-11-20 14:42:28.939555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:50.195  [2024-11-20 14:42:28.939566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:31:50.195  [2024-11-20 14:42:28.939601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      38.88 MiB
00:31:50.195  [2024-11-20 14:42:28.939613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:31:50.195  [2024-11-20 14:42:28.939624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:31:50.195  [2024-11-20 14:42:28.939635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      46.88 MiB
00:31:50.195  [2024-11-20 14:42:28.939646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:50.195  [2024-11-20 14:42:28.939658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:31:50.195  [2024-11-20 14:42:28.939669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.00 MiB
00:31:50.195  [2024-11-20 14:42:28.939680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:50.195  [2024-11-20 14:42:28.939691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log
00:31:50.195  [2024-11-20 14:42:28.939701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.12 MiB
00:31:50.195  [2024-11-20 14:42:28.939726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:50.195  [2024-11-20 14:42:28.939738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror
00:31:50.195  [2024-11-20 14:42:28.939749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.25 MiB
00:31:50.195  [2024-11-20 14:42:28.939759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:50.195  [2024-11-20 14:42:28.939770] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:31:50.195  [2024-11-20 14:42:28.939782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:31:50.195  [2024-11-20 14:42:28.939794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:31:50.195  [2024-11-20 14:42:28.939806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:31:50.195  [2024-11-20 14:42:28.939824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:31:50.195  [2024-11-20 14:42:28.939835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      18432.25 MiB
00:31:50.195  [2024-11-20 14:42:28.939846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.88 MiB
00:31:50.195  [2024-11-20 14:42:28.939857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:31:50.195  [2024-11-20 14:42:28.939868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.25 MiB
00:31:50.195  [2024-11-20 14:42:28.939879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      18432.00 MiB
00:31:50.195  [2024-11-20 14:42:28.939892] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:31:50.195  [2024-11-20 14:42:28.939906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.939920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:31:50.195  [2024-11-20 14:42:28.939933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.939945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.939956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
00:31:50.195  [2024-11-20 14:42:28.939968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
00:31:50.195  [2024-11-20 14:42:28.939980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
00:31:50.195  [2024-11-20 14:42:28.939992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
00:31:50.195  [2024-11-20 14:42:28.940004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.940015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.940027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.940040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.940051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.940063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.940075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060
00:31:50.195  [2024-11-20 14:42:28.940087] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:31:50.195  [2024-11-20 14:42:28.940100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.940115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:50.195  [2024-11-20 14:42:28.940128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:31:50.195  [2024-11-20 14:42:28.940139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:31:50.195  [2024-11-20 14:42:28.940152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:31:50.195  [2024-11-20 14:42:28.940165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:50.195  [2024-11-20 14:42:28.940177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Layout upgrade
00:31:50.195  [2024-11-20 14:42:28.940189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.022 ms
00:31:50.195  [2024-11-20 14:42:28.940201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:50.195  [2024-11-20 14:42:28.940264] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while.
00:31:50.195  [2024-11-20 14:42:28.940283] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks
00:31:52.730  [2024-11-20 14:42:31.295991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.730  [2024-11-20 14:42:31.296286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Scrub NV cache
00:31:52.730  [2024-11-20 14:42:31.296408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 2355.743 ms
00:31:52.730  [2024-11-20 14:42:31.296457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.325724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.326051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:31:52.731  [2024-11-20 14:42:31.326175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 28.913 ms
00:31:52.731  [2024-11-20 14:42:31.326224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.326547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.326629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize band addresses
00:31:52.731  [2024-11-20 14:42:31.326865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.015 ms
00:31:52.731  [2024-11-20 14:42:31.326913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.364716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.364977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:31:52.731  [2024-11-20 14:42:31.365005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 37.661 ms
00:31:52.731  [2024-11-20 14:42:31.365024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.365091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.365107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:31:52.731  [2024-11-20 14:42:31.365118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:31:52.731  [2024-11-20 14:42:31.365128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.365520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.365538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:31:52.731  [2024-11-20 14:42:31.365549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.315 ms
00:31:52.731  [2024-11-20 14:42:31.365559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.365652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.365669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:31:52.731  [2024-11-20 14:42:31.365680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.059 ms
00:31:52.731  [2024-11-20 14:42:31.365691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.381789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.381833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:31:52.731  [2024-11-20 14:42:31.381865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 16.070 ms
00:31:52.731  [2024-11-20 14:42:31.381876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.405888] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4
00:31:52.731  [2024-11-20 14:42:31.406110] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully
00:31:52.731  [2024-11-20 14:42:31.406134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.406147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore NV cache metadata
00:31:52.731  [2024-11-20 14:42:31.406161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 24.117 ms
00:31:52.731  [2024-11-20 14:42:31.406172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.422761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.422805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore valid map metadata
00:31:52.731  [2024-11-20 14:42:31.422840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 16.539 ms
00:31:52.731  [2024-11-20 14:42:31.422852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.437540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.437604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore band info metadata
00:31:52.731  [2024-11-20 14:42:31.437637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 14.620 ms
00:31:52.731  [2024-11-20 14:42:31.437648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.452041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.452215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore trim metadata
00:31:52.731  [2024-11-20 14:42:31.452255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 14.332 ms
00:31:52.731  [2024-11-20 14:42:31.452266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.453178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.453208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize P2L checkpointing
00:31:52.731  [2024-11-20 14:42:31.453237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.790 ms
00:31:52.731  [2024-11-20 14:42:31.453247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.522939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.523019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore P2L checkpoints
00:31:52.731  [2024-11-20 14:42:31.523054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 69.664 ms
00:31:52.731  [2024-11-20 14:42:31.523066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.534616] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:31:52.731  [2024-11-20 14:42:31.535272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.535297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize L2P
00:31:52.731  [2024-11-20 14:42:31.535312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 12.133 ms
00:31:52.731  [2024-11-20 14:42:31.535323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.535433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.535487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore L2P
00:31:52.731  [2024-11-20 14:42:31.535500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.007 ms
00:31:52.731  [2024-11-20 14:42:31.535511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.535588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.535662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize band initialization
00:31:52.731  [2024-11-20 14:42:31.535678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.020 ms
00:31:52.731  [2024-11-20 14:42:31.535689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.535728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.535758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Start core poller
00:31:52.731  [2024-11-20 14:42:31.535791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.007 ms
00:31:52.731  [2024-11-20 14:42:31.535802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.535845] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
00:31:52.731  [2024-11-20 14:42:31.535862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.535887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Self test on startup
00:31:52.731  [2024-11-20 14:42:31.535898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.018 ms
00:31:52.731  [2024-11-20 14:42:31.535908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.562303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.562348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL dirty state
00:31:52.731  [2024-11-20 14:42:31.562379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 26.365 ms
00:31:52.731  [2024-11-20 14:42:31.562390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.562469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:52.731  [2024-11-20 14:42:31.562486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize initialization
00:31:52.731  [2024-11-20 14:42:31.562498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.037 ms
00:31:52.731  [2024-11-20 14:42:31.562507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:52.731  [2024-11-20 14:42:31.563942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2654.303 ms, result 0
00:31:52.731  [2024-11-20 14:42:31.578725] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:52.731  [2024-11-20 14:42:31.594745] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:31:52.731  [2024-11-20 14:42:31.602934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
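[editor's note] The 'FTL startup' management process finishes in 2654.303 ms, and the per-step trace above shows 'Scrub NV cache' (2355.743 ms) accounting for nearly all of it. Since every step logs a name:/duration: pair, steps can be ranked straight from a captured console log; a minimal sketch, where build.log stands in for this output:

  grep -E 'trace_step.*(name:|duration:)' build.log \
    | awk '/name:/ {sub(/.*name:[ \t]*/, ""); step=$0}
           /duration:/ {print $(NF-1) " ms\t" step}' \
    | sort -rn | head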
00:31:53.299   14:42:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:53.299   14:42:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:31:53.299   14:42:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:53.299   14:42:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:31:53.299   14:42:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:31:53.866  [2024-11-20 14:42:32.555968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:53.866  [2024-11-20 14:42:32.556233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decode property
00:31:53.866  [2024-11-20 14:42:32.556264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.008 ms
00:31:53.866  [2024-11-20 14:42:32.556285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:53.866  [2024-11-20 14:42:32.556332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:53.866  [2024-11-20 14:42:32.556348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set property
00:31:53.866  [2024-11-20 14:42:32.556359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.003 ms
00:31:53.866  [2024-11-20 14:42:32.556386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:53.866  [2024-11-20 14:42:32.556446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:53.866  [2024-11-20 14:42:32.556461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Property setting cleanup
00:31:53.866  [2024-11-20 14:42:32.556473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:31:53.866  [2024-11-20 14:42:32.556484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:31:53.866  [2024-11-20 14:42:32.556569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.600 ms, result 0
00:31:53.866  true
00:31:53.866   14:42:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:53.866  {
00:31:53.866    "name": "ftl",
00:31:53.866    "properties": [
00:31:53.866      {
00:31:53.866        "name": "superblock_version",
00:31:53.866        "value": 5,
00:31:53.866        "read-only": true
00:31:53.866      },
00:31:53.866      {
00:31:53.866        "name": "base_device",
00:31:53.866        "bands": [
00:31:53.866          {
00:31:53.866            "id": 0,
00:31:53.866            "state": "CLOSED",
00:31:53.866            "validity": 1.0
00:31:53.866          },
00:31:53.866          {
00:31:53.866            "id": 1,
00:31:53.866            "state": "CLOSED",
00:31:53.866            "validity": 1.0
00:31:53.866          },
00:31:53.866          {
00:31:53.866            "id": 2,
00:31:53.866            "state": "CLOSED",
00:31:53.866            "validity": 0.007843137254901933
00:31:53.866          },
00:31:53.866          {
00:31:53.866            "id": 3,
00:31:53.866            "state": "FREE",
00:31:53.866            "validity": 0.0
00:31:53.866          },
00:31:53.866          {
00:31:53.866            "id": 4,
00:31:53.866            "state": "FREE",
00:31:53.866            "validity": 0.0
00:31:53.866          },
00:31:53.866          {
00:31:53.866            "id": 5,
00:31:53.866            "state": "FREE",
00:31:53.866            "validity": 0.0
00:31:53.866          },
00:31:53.866          {
00:31:53.866            "id": 6,
00:31:53.866            "state": "FREE",
00:31:53.866            "validity": 0.0
00:31:53.866          },
00:31:53.866          {
00:31:53.866            "id": 7,
00:31:53.866            "state": "FREE",
00:31:53.866            "validity": 0.0
00:31:53.866          },
00:31:53.866          {
00:31:53.866            "id": 8,
00:31:53.866            "state": "FREE",
00:31:53.866            "validity": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 9,
00:31:53.867            "state": "FREE",
00:31:53.867            "validity": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 10,
00:31:53.867            "state": "FREE",
00:31:53.867            "validity": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 11,
00:31:53.867            "state": "FREE",
00:31:53.867            "validity": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 12,
00:31:53.867            "state": "FREE",
00:31:53.867            "validity": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 13,
00:31:53.867            "state": "FREE",
00:31:53.867            "validity": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 14,
00:31:53.867            "state": "FREE",
00:31:53.867            "validity": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 15,
00:31:53.867            "state": "FREE",
00:31:53.867            "validity": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 16,
00:31:53.867            "state": "FREE",
00:31:53.867            "validity": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 17,
00:31:53.867            "state": "FREE",
00:31:53.867            "validity": 0.0
00:31:53.867          }
00:31:53.867        ],
00:31:53.867        "read-only": true
00:31:53.867      },
00:31:53.867      {
00:31:53.867        "name": "cache_device",
00:31:53.867        "type": "bdev",
00:31:53.867        "chunks": [
00:31:53.867          {
00:31:53.867            "id": 0,
00:31:53.867            "state": "INACTIVE",
00:31:53.867            "utilization": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 1,
00:31:53.867            "state": "OPEN",
00:31:53.867            "utilization": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 2,
00:31:53.867            "state": "OPEN",
00:31:53.867            "utilization": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 3,
00:31:53.867            "state": "FREE",
00:31:53.867            "utilization": 0.0
00:31:53.867          },
00:31:53.867          {
00:31:53.867            "id": 4,
00:31:53.867            "state": "FREE",
00:31:53.867            "utilization": 0.0
00:31:53.867          }
00:31:53.867        ],
00:31:53.867        "read-only": true
00:31:53.867      },
00:31:53.867      {
00:31:53.867        "name": "verbose_mode",
00:31:53.867        "value": true,
00:31:53.867        "unit": "",
00:31:53.867        "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:31:53.867      },
00:31:53.867      {
00:31:53.867        "name": "prep_upgrade_on_shutdown",
00:31:53.867        "value": false,
00:31:53.867        "unit": "",
00:31:53.867        "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:31:53.867      }
00:31:53.867    ]
00:31:53.867  }
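[editor's note] The properties JSON above is what the script's ftl_get_properties output feeds into jq in the next lines. The chunk-utilization check at upgrade_shutdown.sh@82 boils down to the following (same rpc.py path and bdev name as in the log):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device") | .chunks[]
           | select(.utilization != 0.0)] | length'

With the chunks listed above (all at utilization 0.0) this prints 0, matching used=0 below.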
00:31:53.867    14:42:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:31:53.867    14:42:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:31:53.867    14:42:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:54.435   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:31:54.435   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:31:54.435    14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:31:54.435    14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:31:54.435    14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:54.694  Validate MD5 checksum, iteration 1
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:54.694   14:42:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:54.694  [2024-11-20 14:42:33.557551] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:31:54.694  [2024-11-20 14:42:33.557951] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84225 ]
00:31:54.953  [2024-11-20 14:42:33.746936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:54.953  [2024-11-20 14:42:33.873062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:56.857  
[2024-11-20T14:42:36.775Z] Copying: 449/1024 [MB] (449 MBps)
[2024-11-20T14:42:36.775Z] Copying: 927/1024 [MB] (478 MBps)
[2024-11-20T14:42:38.150Z] Copying: 1024/1024 [MB] (average 457 MBps)
00:31:59.168  
00:31:59.168   14:42:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:31:59.168   14:42:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:32:01.704    14:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:32:01.704  Validate MD5 checksum, iteration 2
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=04c57546b366e418d8401b1412f8c555
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 04c57546b366e418d8401b1412f8c555 != \0\4\c\5\7\5\4\6\b\3\6\6\e\4\1\8\d\8\4\0\1\b\1\4\1\2\f\8\c\5\5\5 ]]
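[editor's note] The run of backslash-escaped characters on the right of the [[ ... != ... ]] test above is bash xtrace quoting the pattern character by character, not log corruption. Each iteration reads a 1 GiB window from the FTL bdev over NVMe/TCP with spdk_dd and hashes it; condensed with the same flags as the log (/tmp/ftl_window is a hypothetical output path):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/tmp/ftl_window --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum /tmp/ftl_window | cut -f1 -d' '   # e.g. 04c57546b366e418d8401b1412f8c555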
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:32:01.704   14:42:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:32:01.704  [2024-11-20 14:42:40.350388] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:32:01.704  [2024-11-20 14:42:40.350565] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84307 ]
00:32:01.704  [2024-11-20 14:42:40.533252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:01.704  [2024-11-20 14:42:40.662932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:03.604  
[2024-11-20T14:42:43.520Z] Copying: 424/1024 [MB] (424 MBps)
[2024-11-20T14:42:43.779Z] Copying: 913/1024 [MB] (489 MBps)
[2024-11-20T14:42:45.681Z] Copying: 1024/1024 [MB] (average 459 MBps)
00:32:06.699  
00:32:06.699   14:42:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:32:06.699   14:42:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:32:08.595    14:42:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b1b5e2ceace26bd1f777d7bb2e0fd3b3
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b1b5e2ceace26bd1f777d7bb2e0fd3b3 != \b\1\b\5\e\2\c\e\a\c\e\2\6\b\d\1\f\7\7\7\d\7\b\b\2\e\0\f\d\3\b\3 ]]
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84146 ]]
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84146
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:32:08.595   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84383
00:32:08.596   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:32:08.596   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:32:08.596   14:42:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84383
00:32:08.596   14:42:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84383 ']'
00:32:08.596   14:42:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:08.596   14:42:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:08.596  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:08.596   14:42:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:08.596   14:42:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:08.596   14:42:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
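[editor's note] This is the dirty-shutdown path: the harness SIGKILLs the running target (pid 84146) so FTL gets no clean shutdown, then immediately starts a fresh spdk_tgt (pid 84383) from the same tgt.json, forcing the startup that follows to recover from on-disk state. The pattern, condensed with the same binary and config paths as above (error handling omitted):

  kill -9 "$spdk_tgt_pid"    # simulate a crash: no clean FTL shutdown
  unset spdk_tgt_pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  # then wait for /var/tmp/spdk.sock before issuing RPCs (waitforlisten)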
00:32:08.853  [2024-11-20 14:42:47.623755] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:32:08.853  [2024-11-20 14:42:47.624169] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84383 ]
00:32:08.853  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84146 Killed                  $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg"
00:32:08.853  [2024-11-20 14:42:47.807284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:09.111  [2024-11-20 14:42:47.911539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:10.043  [2024-11-20 14:42:48.769625] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:32:10.043  [2024-11-20 14:42:48.769879] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1
00:32:10.043  [2024-11-20 14:42:48.917831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.918081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Check configuration
00:32:10.043  [2024-11-20 14:42:48.918224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:32:10.043  [2024-11-20 14:42:48.918276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.918398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.918488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:32:10.043  [2024-11-20 14:42:48.918550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.057 ms
00:32:10.043  [2024-11-20 14:42:48.918567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.918636] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache
00:32:10.043  [2024-11-20 14:42:48.919606] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device
00:32:10.043  [2024-11-20 14:42:48.919635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.919647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:32:10.043  [2024-11-20 14:42:48.919660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.009 ms
00:32:10.043  [2024-11-20 14:42:48.919671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.920196] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0
00:32:10.043  [2024-11-20 14:42:48.941053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.941109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Load super block
00:32:10.043  [2024-11-20 14:42:48.941143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 20.859 ms
00:32:10.043  [2024-11-20 14:42:48.941155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.953267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.953308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Validate super block
00:32:10.043  [2024-11-20 14:42:48.953345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.025 ms
00:32:10.043  [2024-11-20 14:42:48.953357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.953898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.953926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:32:10.043  [2024-11-20 14:42:48.953941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.437 ms
00:32:10.043  [2024-11-20 14:42:48.953952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.954051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.954071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:32:10.043  [2024-11-20 14:42:48.954083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.071 ms
00:32:10.043  [2024-11-20 14:42:48.954111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.954149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.954166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Register IO device
00:32:10.043  [2024-11-20 14:42:48.954178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.008 ms
00:32:10.043  [2024-11-20 14:42:48.954189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.954223] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread
00:32:10.043  [2024-11-20 14:42:48.958202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.958239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:32:10.043  [2024-11-20 14:42:48.958270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 3.986 ms
00:32:10.043  [2024-11-20 14:42:48.958282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.958321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.958336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Decorate bands
00:32:10.043  [2024-11-20 14:42:48.958348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:10.043  [2024-11-20 14:42:48.958359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.958411] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0
00:32:10.043  [2024-11-20 14:42:48.958441] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes
00:32:10.043  [2024-11-20 14:42:48.958483] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes
00:32:10.043  [2024-11-20 14:42:48.958506] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes
00:32:10.043  [2024-11-20 14:42:48.958640] upgrade/ftl_sb_v5.c:  92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes
00:32:10.043  [2024-11-20 14:42:48.958661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes
00:32:10.043  [2024-11-20 14:42:48.958677] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes
00:32:10.043  [2024-11-20 14:42:48.958691] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity:         20480.00 MiB
00:32:10.043  [2024-11-20 14:42:48.958711] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity:       5120.00 MiB
00:32:10.043  [2024-11-20 14:42:48.958723] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries:                    3774873
00:32:10.043  [2024-11-20 14:42:48.958734] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size:               4
00:32:10.043  [2024-11-20 14:42:48.958744] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages:           2048
00:32:10.043  [2024-11-20 14:42:48.958755] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count            5
00:32:10.043  [2024-11-20 14:42:48.958768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.958784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize layout
00:32:10.043  [2024-11-20 14:42:48.958796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.359 ms
00:32:10.043  [2024-11-20 14:42:48.958807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.958904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.043  [2024-11-20 14:42:48.958920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Verify layout
00:32:10.043  [2024-11-20 14:42:48.958932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.071 ms
00:32:10.043  [2024-11-20 14:42:48.958942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.043  [2024-11-20 14:42:48.959058] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout:
00:32:10.043  [2024-11-20 14:42:48.959082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb
00:32:10.043  [2024-11-20 14:42:48.959100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:32:10.043  [2024-11-20 14:42:48.959112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:10.043  [2024-11-20 14:42:48.959124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p
00:32:10.044  [2024-11-20 14:42:48.959134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      14.50 MiB
00:32:10.044  [2024-11-20 14:42:48.959155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md
00:32:10.044  [2024-11-20 14:42:48.959166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.62 MiB
00:32:10.044  [2024-11-20 14:42:48.959176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror
00:32:10.044  [2024-11-20 14:42:48.959199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.75 MiB
00:32:10.044  [2024-11-20 14:42:48.959209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md
00:32:10.044  [2024-11-20 14:42:48.959230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.38 MiB
00:32:10.044  [2024-11-20 14:42:48.959240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror
00:32:10.044  [2024-11-20 14:42:48.959261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.50 MiB
00:32:10.044  [2024-11-20 14:42:48.959271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0
00:32:10.044  [2024-11-20 14:42:48.959292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      14.88 MiB
00:32:10.044  [2024-11-20 14:42:48.959302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:10.044  [2024-11-20 14:42:48.959313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1
00:32:10.044  [2024-11-20 14:42:48.959336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      22.88 MiB
00:32:10.044  [2024-11-20 14:42:48.959347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:10.044  [2024-11-20 14:42:48.959357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2
00:32:10.044  [2024-11-20 14:42:48.959368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      30.88 MiB
00:32:10.044  [2024-11-20 14:42:48.959378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:10.044  [2024-11-20 14:42:48.959388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3
00:32:10.044  [2024-11-20 14:42:48.959398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      38.88 MiB
00:32:10.044  [2024-11-20 14:42:48.959408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      8.00 MiB
00:32:10.044  [2024-11-20 14:42:48.959418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md
00:32:10.044  [2024-11-20 14:42:48.959439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      46.88 MiB
00:32:10.044  [2024-11-20 14:42:48.959451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror
00:32:10.044  [2024-11-20 14:42:48.959471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.00 MiB
00:32:10.044  [2024-11-20 14:42:48.959482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log
00:32:10.044  [2024-11-20 14:42:48.959504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror
00:32:10.044  [2024-11-20 14:42:48.959534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      47.25 MiB
00:32:10.044  [2024-11-20 14:42:48.959546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959556] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout:
00:32:10.044  [2024-11-20 14:42:48.959581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror
00:32:10.044  [2024-11-20 14:42:48.959595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.00 MiB
00:32:10.044  [2024-11-20 14:42:48.959606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.12 MiB
00:32:10.044  [2024-11-20 14:42:48.959618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap
00:32:10.044  [2024-11-20 14:42:48.959629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      18432.25 MiB
00:32:10.044  [2024-11-20 14:42:48.959639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      0.88 MiB
00:32:10.044  [2024-11-20 14:42:48.959650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm
00:32:10.044  [2024-11-20 14:42:48.959660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] 	offset:                      0.25 MiB
00:32:10.044  [2024-11-20 14:42:48.959671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] 	blocks:                      18432.00 MiB
00:32:10.044  [2024-11-20 14:42:48.959683] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc:
00:32:10.044  [2024-11-20 14:42:48.959696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80
00:32:10.044  [2024-11-20 14:42:48.959720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800
00:32:10.044  [2024-11-20 14:42:48.959753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800
00:32:10.044  [2024-11-20 14:42:48.959764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800
00:32:10.044  [2024-11-20 14:42:48.959775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800
00:32:10.044  [2024-11-20 14:42:48.959785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060
00:32:10.044  [2024-11-20 14:42:48.959871] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev:
00:32:10.044  [2024-11-20 14:42:48.959883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:32:10.044  [2024-11-20 14:42:48.959912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000
00:32:10.044  [2024-11-20 14:42:48.959923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0
00:32:10.044  [2024-11-20 14:42:48.959935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0
00:32:10.044  [2024-11-20 14:42:48.959948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.044  [2024-11-20 14:42:48.959959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Layout upgrade
00:32:10.044  [2024-11-20 14:42:48.959971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.961 ms
00:32:10.044  [2024-11-20 14:42:48.959982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.044  [2024-11-20 14:42:48.991633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.044  [2024-11-20 14:42:48.991872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:32:10.044  [2024-11-20 14:42:48.992005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 31.579 ms
00:32:10.044  [2024-11-20 14:42:48.992054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.044  [2024-11-20 14:42:48.992148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.044  [2024-11-20 14:42:48.992292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize band addresses
00:32:10.044  [2024-11-20 14:42:48.992343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.015 ms
00:32:10.044  [2024-11-20 14:42:48.992480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.302  [2024-11-20 14:42:49.033863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.302  [2024-11-20 14:42:49.034131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:32:10.302  [2024-11-20 14:42:49.034304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 41.228 ms
00:32:10.302  [2024-11-20 14:42:49.034357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.302  [2024-11-20 14:42:49.034521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.302  [2024-11-20 14:42:49.034594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:32:10.302  [2024-11-20 14:42:49.034795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:10.302  [2024-11-20 14:42:49.034845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.302  [2024-11-20 14:42:49.035082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.302  [2024-11-20 14:42:49.035207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:32:10.302  [2024-11-20 14:42:49.035232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.080 ms
00:32:10.302  [2024-11-20 14:42:49.035244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.302  [2024-11-20 14:42:49.035312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.302  [2024-11-20 14:42:49.035329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:32:10.302  [2024-11-20 14:42:49.035342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.028 ms
00:32:10.302  [2024-11-20 14:42:49.035353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.302  [2024-11-20 14:42:49.053537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.302  [2024-11-20 14:42:49.053627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:32:10.302  [2024-11-20 14:42:49.053678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 18.147 ms
00:32:10.302  [2024-11-20 14:42:49.053696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.302  [2024-11-20 14:42:49.053857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.302  [2024-11-20 14:42:49.053881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize recovery
00:32:10.303  [2024-11-20 14:42:49.053894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:32:10.303  [2024-11-20 14:42:49.053905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.303  [2024-11-20 14:42:49.082186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.303  [2024-11-20 14:42:49.082231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover band state
00:32:10.303  [2024-11-20 14:42:49.082267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 28.227 ms
00:32:10.303  [2024-11-20 14:42:49.082279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.303  [2024-11-20 14:42:49.094897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.303  [2024-11-20 14:42:49.094939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize P2L checkpointing
00:32:10.303  [2024-11-20 14:42:49.094981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.720 ms
00:32:10.303  [2024-11-20 14:42:49.094993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.303  [2024-11-20 14:42:49.167627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.303  [2024-11-20 14:42:49.167893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore P2L checkpoints
00:32:10.303  [2024-11-20 14:42:49.167934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 72.554 ms
00:32:10.303  [2024-11-20 14:42:49.167948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.303  [2024-11-20 14:42:49.168166] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8
00:32:10.303  [2024-11-20 14:42:49.168314] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9
00:32:10.303  [2024-11-20 14:42:49.168450] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12
00:32:10.303  [2024-11-20 14:42:49.168596] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0
00:32:10.303  [2024-11-20 14:42:49.168614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.303  [2024-11-20 14:42:49.168626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Preprocess P2L checkpoints
00:32:10.303  [2024-11-20 14:42:49.168639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.593 ms
00:32:10.303  [2024-11-20 14:42:49.168650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.303  [2024-11-20 14:42:49.168778] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L
00:32:10.303  [2024-11-20 14:42:49.168802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.303  [2024-11-20 14:42:49.168819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover open bands P2L
00:32:10.303  [2024-11-20 14:42:49.168832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.025 ms
00:32:10.303  [2024-11-20 14:42:49.168844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.303  [2024-11-20 14:42:49.188447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.303  [2024-11-20 14:42:49.188496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover chunk state
00:32:10.303  [2024-11-20 14:42:49.188529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 19.570 ms
00:32:10.303  [2024-11-20 14:42:49.188540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.303  [2024-11-20 14:42:49.200491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.303  [2024-11-20 14:42:49.200532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover max seq ID
00:32:10.303  [2024-11-20 14:42:49.200566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.010 ms
00:32:10.303  [2024-11-20 14:42:49.200576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.303  [2024-11-20 14:42:49.200728] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14
00:32:10.303  [2024-11-20 14:42:49.200875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.303  [2024-11-20 14:42:49.200942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, prepare
00:32:10.303  [2024-11-20 14:42:49.200960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.150 ms
00:32:10.303  [2024-11-20 14:42:49.200972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.868  [2024-11-20 14:42:49.748737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.868  [2024-11-20 14:42:49.748821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, read vss
00:32:10.868  [2024-11-20 14:42:49.748843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 546.660 ms
00:32:10.868  [2024-11-20 14:42:49.748855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.868  [2024-11-20 14:42:49.753804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.868  [2024-11-20 14:42:49.753850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, persist P2L map
00:32:10.868  [2024-11-20 14:42:49.753867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.005 ms
00:32:10.868  [2024-11-20 14:42:49.753880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.868  [2024-11-20 14:42:49.754332] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14
00:32:10.868  [2024-11-20 14:42:49.754368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.868  [2024-11-20 14:42:49.754383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, close chunk
00:32:10.868  [2024-11-20 14:42:49.754396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.437 ms
00:32:10.868  [2024-11-20 14:42:49.754407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.868  [2024-11-20 14:42:49.754452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.868  [2024-11-20 14:42:49.754471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, cleanup
00:32:10.868  [2024-11-20 14:42:49.754483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:10.868  [2024-11-20 14:42:49.754495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:10.868  [2024-11-20 14:42:49.754550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 553.825 ms, result 0
00:32:10.868  [2024-11-20 14:42:49.754626] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15
00:32:10.868  [2024-11-20 14:42:49.754713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:10.868  [2024-11-20 14:42:49.754727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, prepare
00:32:10.868  [2024-11-20 14:42:49.754739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.090 ms
00:32:10.868  [2024-11-20 14:42:49.754749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.296384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.296648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, read vss
00:32:11.435  [2024-11-20 14:42:50.296682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 540.506 ms
00:32:11.435  [2024-11-20 14:42:50.296695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.301481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.301526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, persist P2L map
00:32:11.435  [2024-11-20 14:42:50.301545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.872 ms
00:32:11.435  [2024-11-20 14:42:50.301556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.301996] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15
00:32:11.435  [2024-11-20 14:42:50.302030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.302043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, close chunk
00:32:11.435  [2024-11-20 14:42:50.302056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.418 ms
00:32:11.435  [2024-11-20 14:42:50.302066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.302116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.302135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Chunk recovery, cleanup
00:32:11.435  [2024-11-20 14:42:50.302147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:11.435  [2024-11-20 14:42:50.302158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.302208] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 547.586 ms, result 0
00:32:11.435  [2024-11-20 14:42:50.302264] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2
00:32:11.435  [2024-11-20 14:42:50.302281] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully
00:32:11.435  [2024-11-20 14:42:50.302295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.302306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Recover open chunks P2L
00:32:11.435  [2024-11-20 14:42:50.302319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1101.599 ms
00:32:11.435  [2024-11-20 14:42:50.302330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.302372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.302388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize recovery
00:32:11.435  [2024-11-20 14:42:50.302407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.001 ms
00:32:11.435  [2024-11-20 14:42:50.302418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.315006] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB
00:32:11.435  [2024-11-20 14:42:50.315203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.315222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize L2P
00:32:11.435  [2024-11-20 14:42:50.315236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 12.763 ms
00:32:11.435  [2024-11-20 14:42:50.315247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.316014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.316049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore L2P from shared memory
00:32:11.435  [2024-11-20 14:42:50.316069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.671 ms
00:32:11.435  [2024-11-20 14:42:50.316080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.318602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.318644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Restore valid maps counters
00:32:11.435  [2024-11-20 14:42:50.318659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 2.493 ms
00:32:11.435  [2024-11-20 14:42:50.318670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.318721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.318737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Complete trim transaction
00:32:11.435  [2024-11-20 14:42:50.318749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.002 ms
00:32:11.435  [2024-11-20 14:42:50.318766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.318890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.318907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize band initialization
00:32:11.435  [2024-11-20 14:42:50.318919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.018 ms
00:32:11.435  [2024-11-20 14:42:50.318937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.318965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.318979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Start core poller
00:32:11.435  [2024-11-20 14:42:50.318990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.006 ms
00:32:11.435  [2024-11-20 14:42:50.319001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.319046] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped
00:32:11.435  [2024-11-20 14:42:50.319064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.319076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Self test on startup
00:32:11.435  [2024-11-20 14:42:50.319087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.019 ms
00:32:11.435  [2024-11-20 14:42:50.319098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.319162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:11.435  [2024-11-20 14:42:50.319178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finalize initialization
00:32:11.435  [2024-11-20 14:42:50.319190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.039 ms
00:32:11.435  [2024-11-20 14:42:50.319201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:11.435  [2024-11-20 14:42:50.320459] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1402.154 ms, result 0
00:32:11.435  [2024-11-20 14:42:50.335744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:11.435  [2024-11-20 14:42:50.351737] mngt/ftl_mngt_ioch.c:  57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:32:11.435  [2024-11-20 14:42:50.360990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:32:11.435  Validate MD5 checksum, iteration 1
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:32:11.435   14:42:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:32:11.693  [2024-11-20 14:42:50.502791] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:32:11.693  [2024-11-20 14:42:50.503216] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84420 ]
00:32:11.951  [2024-11-20 14:42:50.683923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:11.951  [2024-11-20 14:42:50.787882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:13.896  
[2024-11-20T14:42:53.443Z] Copying: 515/1024 [MB] (515 MBps)
[2024-11-20T14:42:53.701Z] Copying: 992/1024 [MB] (477 MBps)
[2024-11-20T14:42:55.073Z] Copying: 1024/1024 [MB] (average 493 MBps)
00:32:16.091  
00:32:16.091   14:42:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:32:16.091   14:42:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:32:18.622    14:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:32:18.622  Validate MD5 checksum, iteration 2
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=04c57546b366e418d8401b1412f8c555
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 04c57546b366e418d8401b1412f8c555 != \0\4\c\5\7\5\4\6\b\3\6\6\e\4\1\8\d\8\4\0\1\b\1\4\1\2\f\8\c\5\5\5 ]]
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:32:18.622   14:42:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:32:18.622  [2024-11-20 14:42:57.073066] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:32:18.622  [2024-11-20 14:42:57.073454] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84492 ]
00:32:18.622  [2024-11-20 14:42:57.245577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:18.622  [2024-11-20 14:42:57.349093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:19.993  
[2024-11-20T14:43:00.346Z] Copying: 464/1024 [MB] (464 MBps)
[2024-11-20T14:43:00.346Z] Copying: 929/1024 [MB] (465 MBps)
[2024-11-20T14:43:01.281Z] Copying: 1024/1024 [MB] (average 466 MBps)
00:32:22.299  
00:32:22.299   14:43:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:32:22.299   14:43:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:32:24.852    14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:32:24.852   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b1b5e2ceace26bd1f777d7bb2e0fd3b3
00:32:24.852   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b1b5e2ceace26bd1f777d7bb2e0fd3b3 != \b\1\b\5\e\2\c\e\a\c\e\2\6\b\d\1\f\7\7\7\d\7\b\b\2\e\0\f\d\3\b\3 ]]
00:32:24.852   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:32:24.852   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:32:24.852   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84383 ]]
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84383
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84383 ']'
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84383
00:32:24.853    14:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:24.853    14:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84383
00:32:24.853  killing process with pid 84383
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84383'
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84383
00:32:24.853   14:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84383
00:32:25.790  [2024-11-20 14:43:04.591063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
00:32:25.790  [2024-11-20 14:43:04.607086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.607130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinit core IO channel
00:32:25.790  [2024-11-20 14:43:04.607166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.004 ms
00:32:25.790  [2024-11-20 14:43:04.607176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.607205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
00:32:25.790  [2024-11-20 14:43:04.610443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.610678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Unregister IO device
00:32:25.790  [2024-11-20 14:43:04.610711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 3.217 ms
00:32:25.790  [2024-11-20 14:43:04.610724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.610992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.611026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Stop core poller
00:32:25.790  [2024-11-20 14:43:04.611038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.234 ms
00:32:25.790  [2024-11-20 14:43:04.611049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.612337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.612406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist L2P
00:32:25.790  [2024-11-20 14:43:04.612421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.268 ms
00:32:25.790  [2024-11-20 14:43:04.612432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.613838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.613874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Finish L2P trims
00:32:25.790  [2024-11-20 14:43:04.613889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 1.344 ms
00:32:25.790  [2024-11-20 14:43:04.613901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.626776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.626819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist NV cache metadata
00:32:25.790  [2024-11-20 14:43:04.626836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 12.834 ms
00:32:25.790  [2024-11-20 14:43:04.626855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.633768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.633810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist valid map metadata
00:32:25.790  [2024-11-20 14:43:04.633826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 6.869 ms
00:32:25.790  [2024-11-20 14:43:04.633838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.633925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.633945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist P2L metadata
00:32:25.790  [2024-11-20 14:43:04.633958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.043 ms
00:32:25.790  [2024-11-20 14:43:04.633970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.646579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.646641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist band info metadata
00:32:25.790  [2024-11-20 14:43:04.646691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 12.580 ms
00:32:25.790  [2024-11-20 14:43:04.646703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.659279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.659314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist trim metadata
00:32:25.790  [2024-11-20 14:43:04.659344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 12.534 ms
00:32:25.790  [2024-11-20 14:43:04.659354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.671644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.671684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Persist superblock
00:32:25.790  [2024-11-20 14:43:04.671700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 12.251 ms
00:32:25.790  [2024-11-20 14:43:04.671711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.790  [2024-11-20 14:43:04.683292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.790  [2024-11-20 14:43:04.683328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Set FTL clean state
00:32:25.790  [2024-11-20 14:43:04.683359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 11.480 ms
00:32:25.791  [2024-11-20 14:43:04.683368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.791  [2024-11-20 14:43:04.683406] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity:
00:32:25.791  [2024-11-20 14:43:04.683436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   1:   261120 / 261120 	wr_cnt: 1	state: closed
00:32:25.791  [2024-11-20 14:43:04.683468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   2:   261120 / 261120 	wr_cnt: 1	state: closed
00:32:25.791  [2024-11-20 14:43:04.683481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   3:     2048 / 261120 	wr_cnt: 1	state: closed
00:32:25.791  [2024-11-20 14:43:04.683492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   4:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   5:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   6:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   7:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   8:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band   9:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  10:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  11:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  12:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  13:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  14:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  15:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  16:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  17:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl]  Band  18:        0 / 261120 	wr_cnt: 0	state: free
00:32:25.791  [2024-11-20 14:43:04.683704] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 
00:32:25.791  [2024-11-20 14:43:04.683715] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID:         0cdf9912-e7ac-42d1-8302-7706ba869860
00:32:25.791  [2024-11-20 14:43:04.683727] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs:    524288
00:32:25.791  [2024-11-20 14:43:04.683737] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes:        320
00:32:25.791  [2024-11-20 14:43:04.683748] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes:         0
00:32:25.791  [2024-11-20 14:43:04.683759] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF:                 inf
00:32:25.791  [2024-11-20 14:43:04.683785] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits:
00:32:25.791  [2024-11-20 14:43:04.683795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   crit: 0
00:32:25.791  [2024-11-20 14:43:04.683806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]   high: 0
00:32:25.791  [2024-11-20 14:43:04.683815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]    low: 0
00:32:25.791  [2024-11-20 14:43:04.683825] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl]  start: 0
00:32:25.791  [2024-11-20 14:43:04.683850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.791  [2024-11-20 14:43:04.683868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Dump statistics
00:32:25.791  [2024-11-20 14:43:04.683880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.446 ms
00:32:25.791  [2024-11-20 14:43:04.683891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.791  [2024-11-20 14:43:04.700866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.791  [2024-11-20 14:43:04.700903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize L2P
00:32:25.791  [2024-11-20 14:43:04.700918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 16.951 ms
00:32:25.791  [2024-11-20 14:43:04.700929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.791  [2024-11-20 14:43:04.701429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:32:25.791  [2024-11-20 14:43:04.701447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Deinitialize P2L checkpointing
00:32:25.791  [2024-11-20 14:43:04.701460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.472 ms
00:32:25.791  [2024-11-20 14:43:04.701471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.791  [2024-11-20 14:43:04.755547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:25.791  [2024-11-20 14:43:04.755636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize reloc
00:32:25.791  [2024-11-20 14:43:04.755657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:25.791  [2024-11-20 14:43:04.755669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.791  [2024-11-20 14:43:04.755732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:25.791  [2024-11-20 14:43:04.755747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands metadata
00:32:25.791  [2024-11-20 14:43:04.755760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:25.791  [2024-11-20 14:43:04.755771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.791  [2024-11-20 14:43:04.755932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:25.791  [2024-11-20 14:43:04.755952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize trim map
00:32:25.791  [2024-11-20 14:43:04.755965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:25.791  [2024-11-20 14:43:04.755992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:25.791  [2024-11-20 14:43:04.756016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:25.791  [2024-11-20 14:43:04.756047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize valid map
00:32:25.791  [2024-11-20 14:43:04.756058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:25.791  [2024-11-20 14:43:04.756069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:26.049  [2024-11-20 14:43:04.860073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:26.049  [2024-11-20 14:43:04.860310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize NV cache
00:32:26.050  [2024-11-20 14:43:04.860339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:26.050  [2024-11-20 14:43:04.860356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:26.050  [2024-11-20 14:43:04.944480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:26.050  [2024-11-20 14:43:04.944733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize metadata
00:32:26.050  [2024-11-20 14:43:04.944764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:26.050  [2024-11-20 14:43:04.944777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:26.050  [2024-11-20 14:43:04.944906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:26.050  [2024-11-20 14:43:04.944926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize core IO channel
00:32:26.050  [2024-11-20 14:43:04.944939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:26.050  [2024-11-20 14:43:04.944951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:26.050  [2024-11-20 14:43:04.945016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:26.050  [2024-11-20 14:43:04.945035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize bands
00:32:26.050  [2024-11-20 14:43:04.945055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:26.050  [2024-11-20 14:43:04.945095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:26.050  [2024-11-20 14:43:04.945245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:26.050  [2024-11-20 14:43:04.945265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize memory pools
00:32:26.050  [2024-11-20 14:43:04.945277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:26.050  [2024-11-20 14:43:04.945288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:26.050  [2024-11-20 14:43:04.945338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:26.050  [2024-11-20 14:43:04.945355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Initialize superblock
00:32:26.050  [2024-11-20 14:43:04.945367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:26.050  [2024-11-20 14:43:04.945383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:26.050  [2024-11-20 14:43:04.945427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:26.050  [2024-11-20 14:43:04.945441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open cache bdev
00:32:26.050  [2024-11-20 14:43:04.945452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:26.050  [2024-11-20 14:43:04.945463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:26.050  [2024-11-20 14:43:04.945513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:32:26.050  [2024-11-20 14:43:04.945529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 	 name:     Open base bdev
00:32:26.050  [2024-11-20 14:43:04.945545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 	 duration: 0.000 ms
00:32:26.050  [2024-11-20 14:43:04.945555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 	 status:   0
00:32:26.050  [2024-11-20 14:43:04.945723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 338.589 ms, result 0
00:32:27.425   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:32:27.425   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:32:27.426  Remove shared memory files
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84146
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:32:27.426  ************************************
00:32:27.426  END TEST ftl_upgrade_shutdown
00:32:27.426  ************************************
00:32:27.426  
00:32:27.426  real	1m34.650s
00:32:27.426  user	2m17.256s
00:32:27.426  sys	0m23.206s
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:27.426   14:43:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:32:27.426  Process with pid 77028 is not found
00:32:27.426   14:43:06 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:32:27.426   14:43:06 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:32:27.426   14:43:06 ftl -- ftl/ftl.sh@14 -- # killprocess 77028
00:32:27.426   14:43:06 ftl -- common/autotest_common.sh@954 -- # '[' -z 77028 ']'
00:32:27.426   14:43:06 ftl -- common/autotest_common.sh@958 -- # kill -0 77028
00:32:27.426  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77028) - No such process
00:32:27.426   14:43:06 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77028 is not found'
00:32:27.426   14:43:06 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:32:27.426   14:43:06 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84615
00:32:27.426   14:43:06 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:32:27.426   14:43:06 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84615
00:32:27.426   14:43:06 ftl -- common/autotest_common.sh@835 -- # '[' -z 84615 ']'
00:32:27.426   14:43:06 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:27.426   14:43:06 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:27.426  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:27.426   14:43:06 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:27.426   14:43:06 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:27.426   14:43:06 ftl -- common/autotest_common.sh@10 -- # set +x
00:32:27.426  [2024-11-20 14:43:06.198686] Starting SPDK v25.01-pre git sha1 5c8d99223 / DPDK 24.03.0 initialization...
00:32:27.426  [2024-11-20 14:43:06.198868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84615 ]
00:32:27.426  [2024-11-20 14:43:06.377447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:27.684  [2024-11-20 14:43:06.473001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:28.251   14:43:07 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:28.251   14:43:07 ftl -- common/autotest_common.sh@868 -- # return 0
00:32:28.251   14:43:07 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:32:28.817  nvme0n1
00:32:28.817   14:43:07 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:32:28.817    14:43:07 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:28.817    14:43:07 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:32:29.075   14:43:07 ftl -- ftl/common.sh@28 -- # stores=8a588347-7bb2-4e5a-9127-464ce4c9fc90
00:32:29.075   14:43:07 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:32:29.075   14:43:07 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a588347-7bb2-4e5a-9127-464ce4c9fc90
00:32:29.334   14:43:08 ftl -- ftl/ftl.sh@23 -- # killprocess 84615
00:32:29.334   14:43:08 ftl -- common/autotest_common.sh@954 -- # '[' -z 84615 ']'
00:32:29.334   14:43:08 ftl -- common/autotest_common.sh@958 -- # kill -0 84615
00:32:29.334    14:43:08 ftl -- common/autotest_common.sh@959 -- # uname
00:32:29.334   14:43:08 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:29.334    14:43:08 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84615
00:32:29.334  killing process with pid 84615
00:32:29.334   14:43:08 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:29.334   14:43:08 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:29.334   14:43:08 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84615'
00:32:29.334   14:43:08 ftl -- common/autotest_common.sh@973 -- # kill 84615
00:32:29.334   14:43:08 ftl -- common/autotest_common.sh@978 -- # wait 84615
00:32:31.234   14:43:10 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:32:31.492  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:31.492  Waiting for block devices as requested
00:32:31.492  0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:32:31.492  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:32:31.750  0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:32:31.750  0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:32:37.021  * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:32:37.021  Remove shared memory files
00:32:37.021   14:43:15 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:32:37.021   14:43:15 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:32:37.021   14:43:15 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:32:37.021   14:43:15 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:32:37.021   14:43:15 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:32:37.021   14:43:15 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:32:37.021   14:43:15 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:32:37.021  ************************************
00:32:37.021  END TEST ftl
00:32:37.021  ************************************
00:32:37.021  
00:32:37.021  real	11m27.351s
00:32:37.021  user	14m35.646s
00:32:37.021  sys	1m34.037s
00:32:37.021   14:43:15 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:37.021   14:43:15 ftl -- common/autotest_common.sh@10 -- # set +x
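The real/user/sys triple above is the bash time builtin's report for the whole ftl suite: roughly 11.5 minutes of wall-clock time, with user CPU time (14m35s) exceeding wall time because more than one core was busy on average. The harness gets this by wrapping the suite, roughly:

    # "time" around a compound command prints real/user/sys on stderr.
    time {
        sleep 1   # stand-in for the ftl suite body
    }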
00:32:37.021   14:43:15  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:32:37.021   14:43:15  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:32:37.021   14:43:15  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:32:37.021   14:43:15  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:32:37.021   14:43:15  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:32:37.021   14:43:15  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:32:37.021   14:43:15  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:32:37.021   14:43:15  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:32:37.021   14:43:15  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:32:37.021   14:43:15  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:32:37.021   14:43:15  -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:37.021   14:43:15  -- common/autotest_common.sh@10 -- # set +x
00:32:37.021   14:43:15  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:32:37.021   14:43:15  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:32:37.021   14:43:15  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:32:37.021   14:43:15  -- common/autotest_common.sh@10 -- # set +x
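The run of '[ 0 -eq 1 ]' tests above is how autotest.sh gates optional suites: each SPDK_TEST_* flag defaults to 0 and a suite runs only when the job config set its flag to 1. A minimal sketch of the pattern, with a hypothetical flag name:

    # Hypothetical flag, defaulting to off like the real SPDK_TEST_* knobs.
    : "${SPDK_TEST_EXAMPLE:=0}"
    if [[ $SPDK_TEST_EXAMPLE -eq 1 ]]; then
        echo "running the optional example suite"
    fi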
00:32:38.927  INFO: APP EXITING
00:32:38.927  INFO: killing all VMs
00:32:38.927  INFO: killing vhost app
00:32:38.927  INFO: EXIT DONE
00:32:38.927  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:39.494  0000:00:11.0 (1b36 0010): Already using the nvme driver
00:32:39.494  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:32:39.494  0000:00:12.0 (1b36 0010): Already using the nvme driver
00:32:39.494  0000:00:13.0 (1b36 0010): Already using the nvme driver
00:32:39.752  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:40.321  Cleaning
00:32:40.321  Removing:    /var/run/dpdk/spdk0/config
00:32:40.321  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:40.321  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:40.321  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:40.321  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:40.321  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:32:40.321  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:32:40.321  Removing:    /var/run/dpdk/spdk0
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid58048
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid58272
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid58501
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid58604
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid58656
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid58784
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid58802
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid59012
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid59117
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid59215
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid59337
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid59446
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid59486
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid59522
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid59598
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid59712
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60194
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60269
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60343
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60359
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60507
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60523
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60669
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60691
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60755
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60773
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60837
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid60866
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid61061
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid61098
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid61187
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid61380
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid61465
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid61511
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid61994
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid62098
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid62213
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid62270
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid62297
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid62381
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid63018
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid63061
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid63589
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid63693
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid63813
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid63866
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid63892
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid63922
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid65812
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid65955
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid65964
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid65976
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid66022
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid66026
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid66038
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid66083
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid66092
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid66104
00:32:40.321  Removing:    /var/run/dpdk/spdk_pid66149
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid66153
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid66165
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid67584
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid67692
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid69098
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid70838
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid70918
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71000
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71112
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71204
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71306
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71387
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71463
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71567
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71670
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71766
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71846
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid71921
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid72031
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid72123
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid72225
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid72304
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid72388
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid72494
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid72592
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid72691
00:32:40.322  Removing:    /var/run/dpdk/spdk_pid72772
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid72846
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid72921
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid72995
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid73104
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid73202
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid73297
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid73375
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid73451
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid73531
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid73605
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid73713
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid73812
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid73956
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid74240
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid74278
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid74767
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid74952
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid75051
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid75161
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid75209
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid75240
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid75531
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid75594
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid75680
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid76101
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid76242
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid77028
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid77177
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid77392
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid77491
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid77866
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid78151
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid78503
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid78707
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid78832
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid78896
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid79044
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid79076
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid79140
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid79346
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid79594
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid79958
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid80386
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid80778
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid81281
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid81419
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid81522
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid82157
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid82239
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid82638
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid83043
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid83540
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid83667
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid83720
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid83790
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid83852
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid83922
00:32:40.581  Removing:    /var/run/dpdk/spdk_pid84146
00:32:40.582  Removing:    /var/run/dpdk/spdk_pid84225
00:32:40.582  Removing:    /var/run/dpdk/spdk_pid84307
00:32:40.582  Removing:    /var/run/dpdk/spdk_pid84383
00:32:40.582  Removing:    /var/run/dpdk/spdk_pid84420
00:32:40.582  Removing:    /var/run/dpdk/spdk_pid84492
00:32:40.582  Removing:    /var/run/dpdk/spdk_pid84615
00:32:40.582  Clean
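The Clean step above removes DPDK's runtime state for the primary process (config, fbarray memory-segment maps, hugepage info under /var/run/dpdk/spdk0) plus one /var/run/dpdk/spdk_pid<N> file per SPDK application launched during the run; the length of the pid list simply reflects how many app instances this job started. A sketch of the same sweep:

    # Sweep DPDK runtime state and per-process pid files left by the run.
    rm -rf /var/run/dpdk/spdk0
    rm -f /var/run/dpdk/spdk_pid*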
00:32:40.582   14:43:19  -- common/autotest_common.sh@1453 -- # return 0
00:32:40.582   14:43:19  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:32:40.582   14:43:19  -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:40.582   14:43:19  -- common/autotest_common.sh@10 -- # set +x
00:32:40.841   14:43:19  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:32:40.841   14:43:19  -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:40.841   14:43:19  -- common/autotest_common.sh@10 -- # set +x
00:32:40.841   14:43:19  -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:40.841   14:43:19  -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:40.841   14:43:19  -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:40.841   14:43:19  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:32:40.841    14:43:19  -- spdk/autotest.sh@398 -- # hostname
00:32:40.841   14:43:19  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:41.100  geninfo: WARNING: invalid characters removed from testname!
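Coverage collection starts by capturing test-time counters from the build tree into cov_test.info, labelled with the test VM's hostname. Condensed from the command above (the long string of --rc branch/function-coverage switches is elided):

    # Capture post-test counters (-c) from the SPDK tree (-d), keeping only
    # files inside the tree (--no-external) and labelling the run (-t).
    lcov -q -c --no-external \
        -d /home/vagrant/spdk_repo/spdk \
        -t "$(hostname)" \
        -o cov_test.info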
00:33:13.254   14:43:46  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:13.254   14:43:50  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:15.156   14:43:53  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:18.466   14:43:56  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:20.999   14:43:59  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:23.531   14:44:02  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:26.817   14:44:05  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
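The baseline and test captures are then merged (-a) and the total repeatedly filtered (-r) to drop DPDK, system headers, and example/app sources from the report; condensed from the commands above (again minus the --rc switches, and minus the --ignore-errors workaround on the /usr/* pass):

    # Merge baseline and test counters, then strip out-of-scope paths.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info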
00:33:26.817   14:44:05  -- spdk/autorun.sh@1 -- $ timing_finish
00:33:26.817   14:44:05  -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:33:26.817   14:44:05  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:26.817   14:44:05  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:26.817   14:44:05  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
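timing_finish renders the per-step durations recorded in timing.txt as a flame graph when FlameGraph is installed; from the trace above (the SVG redirection is assumed here for illustration, the harness handles the output itself):

    # Render timing.txt as a flame graph: each frame is a test step and
    # its width is the seconds spent in it.
    /usr/local/FlameGraph/flamegraph.pl \
        --title 'Build Timing' \
        --nametype Step: \
        --countname seconds \
        timing.txt > timing.svg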
00:33:26.817  + [[ -n 5294 ]]
00:33:26.817  + sudo kill 5294
00:33:26.826  [Pipeline] }
00:33:26.843  [Pipeline] // timeout
00:33:26.848  [Pipeline] }
00:33:26.862  [Pipeline] // stage
00:33:26.867  [Pipeline] }
00:33:26.882  [Pipeline] // catchError
00:33:26.892  [Pipeline] stage
00:33:26.895  [Pipeline] { (Stop VM)
00:33:26.907  [Pipeline] sh
00:33:27.186  + vagrant halt
00:33:30.496  ==> default: Halting domain...
00:33:37.076  [Pipeline] sh
00:33:37.356  + vagrant destroy -f
00:33:40.643  ==> default: Removing domain...
00:33:40.915  [Pipeline] sh
00:33:41.195  + mv output /var/jenkins/workspace/nvme-vg-autotest/output
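Teardown is plain Vagrant: halt the guest, destroy it without confirmation, then move the collected output directory into the Jenkins workspace for archiving:

    vagrant halt                    # graceful guest shutdown
    vagrant destroy -f              # delete the VM without prompting
    mv output "$WORKSPACE"/output   # WORKSPACE is set by Jenkins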
00:33:41.204  [Pipeline] }
00:33:41.219  [Pipeline] // stage
00:33:41.226  [Pipeline] }
00:33:41.240  [Pipeline] // dir
00:33:41.246  [Pipeline] }
00:33:41.262  [Pipeline] // wrap
00:33:41.269  [Pipeline] }
00:33:41.281  [Pipeline] // catchError
00:33:41.290  [Pipeline] stage
00:33:41.292  [Pipeline] { (Epilogue)
00:33:41.305  [Pipeline] sh
00:33:41.584  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:48.162  [Pipeline] catchError
00:33:48.165  [Pipeline] {
00:33:48.178  [Pipeline] sh
00:33:48.460  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:48.718  Artifacts sizes are good
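check_artifacts_size.sh prints the "Artifacts sizes are good" line above when the collected output fits the job's size budget; a hypothetical sketch of such a guard (the limit and the failure path are assumptions, only the success message comes from this log):

    # Hypothetical guard: fail if the collected artifacts exceed a budget.
    max_kb=$((10 * 1024 * 1024))         # assumed 10 GiB limit
    size_kb=$(du -sk output | cut -f1)
    if (( size_kb > max_kb )); then
        echo "Artifacts exceed the size limit" >&2
        exit 1
    fi
    echo "Artifacts sizes are good"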
00:33:48.727  [Pipeline] }
00:33:48.741  [Pipeline] // catchError
00:33:48.751  [Pipeline] archiveArtifacts
00:33:48.758  Archiving artifacts
00:33:48.870  [Pipeline] cleanWs
00:33:48.881  [WS-CLEANUP] Deleting project workspace...
00:33:48.881  [WS-CLEANUP] Deferred wipeout is used...
00:33:48.887  [WS-CLEANUP] done
00:33:48.889  [Pipeline] }
00:33:48.904  [Pipeline] // stage
00:33:48.910  [Pipeline] }
00:33:48.924  [Pipeline] // node
00:33:48.930  [Pipeline] End of Pipeline
00:33:48.979  Finished: SUCCESS